3  International Large-Scale Assessments and Digital Issues

3.1 Why work with International Large-Scale Assessments?

The digital self-efficacy agenda opened up by the discussions presented above has been measured with different approaches. On the one hand, it has diversified into case studies and small-scale comparative research. On the other hand, there are antecedents that approach the problem through experiments and qualitative techniques. However, the largest body of work on this agenda is found in large-scale studies of educational issues with a survey approach.

In recent years, International Large-Scale Assessments1 have become one of the most relevant types of studies in the field of education, thanks to the large amount of data they collect, the versatility with which those data can be treated, and, consequently, the diverse contributions they can make to the participating regions.

The birth of ILSAs dates back to 1958, when a group of researchers at the UNESCO Institute of Education became interested in studying educational achievement and its determinants in different countries, intending to enable countries to learn from the experience of others and thus avoid decisions that would produce undesirable results (Husén, 1979). This led to the creation of the International Association for the Evaluation of Educational Achievement2. Since then, the IEA has conducted ILSAs periodically. Likewise, the Organization for Economic Cooperation and Development3 has also been a highly relevant organization in promoting this type of study, although its inauguration in this field came only in 2000.

ILSAs are characterized by the deployment of their surveys throughout the world, covering various countries and regions. When the results are published, they are usually illustrated in a ranking format, which opens the way to comparisons between the educational systems that took part in the studies. ILSAs are therefore taken seriously in both research and public policy, since countries that did not obtain good results look to those that scored higher in order to develop improvement strategies in the relevant area. However, this practice has drawn criticism: a government might decide to implement a policy identically inspired by another country's, without considering the particularities of its own region, which would likely lead to the failure of the project (Johansson, 2016).

On the other hand, ILSAs are usually framed around specific themes, while also considering the effects that contextual factors at the country, school, classroom and student levels may have on achievement. For this reason, the questionnaires are not only directed at students; some also gather information from the school, the teachers or the families of the main respondents. Because of this, ILSA data are rich in the uses and treatments they afford: given the data structure, statistical techniques of some complexity, such as multilevel models or structural equation models, can be employed.
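
As a minimal sketch of the kind of multilevel analysis this nested structure affords, the following Python example fits a two-level random-intercept model (students within schools) on simulated data. The variable names (achievement, ses, school_id) are hypothetical and not taken from any actual ILSA codebook, and a production analysis would additionally need to handle sampling weights and plausible values, which are omitted here.

```python
# Minimal sketch: two-level random-intercept model (students within schools),
# the kind of multilevel analysis ILSA data structures afford.
# Simulated data; variable names are hypothetical, not from an ILSA codebook.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_schools, n_students = 50, 30
school_id = np.repeat(np.arange(n_schools), n_students)

school_effect = rng.normal(0, 10, n_schools)[school_id]  # between-school variation
ses = rng.normal(0, 1, n_schools * n_students)           # student-level predictor
achievement = 500 + 15 * ses + school_effect + rng.normal(0, 40, n_schools * n_students)

df = pd.DataFrame({"achievement": achievement, "ses": ses, "school_id": school_id})

# Fixed effect for student SES, random intercept for each school
result = smf.mixedlm("achievement ~ ses", df, groups=df["school_id"]).fit()
print(result.summary())
```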

3.2 ILSAs and Digital Issues

In recent years, Information and Communication Technologies4 have become vitally relevant to life in society as they are increasingly present in all spheres of existence, due to the dizzying pace of technological development. In this context, a number of questions have arisen as to how young people relate to technology, and whether they are really prepared to cope in an increasingly digitized world.

In this context, the use of new technologies in education is a topic in vogue, due to the potential benefits that their implementation could bring in this area, such as learning flexibility, the creation of new learning and interaction environments, and the transformation of the traditional training scenario, among many others (Cabero Almenara & Llorente Cejudo, 2008). However, if these technologies are not democratized in terms of access and knowledge for proper use, societies could be more vulnerable to fragmentation in the digital environment (Trucco & Sunkel, 2010).

Thus, digital competence is seen as a common objective on the horizon by a vast number of countries, which is reflected in the numerous initiatives that have emerged, especially since the beginning of the new millennium, to promote this type of literacy among citizens. Milestones such as the World Summit on the Information Society held in 2003 and 2005, or the i2010 plan carried out by the European Union, have set the tone for joint efforts among countries to promote advances in societies in terms of technology, seeking to promote access to it and the knowledge necessary for citizens to take advantage of its benefits (Echeverría Ezponda, 2010).

This growing relevance of digital competence in society has, in recent years, made digital issues an important part of the ILSA agenda: the studies now include questions of this type in their questionnaires in order to elucidate the conditions in which young people find themselves with respect to their knowledge and skills in digital technologies. In this context, the attitudes and dispositions young people hold towards ICTs have become very important, with self-efficacy standing out in this area.

It thus becomes necessary to understand the current context of emerging technologies and the skills needed to benefit from them. At a general level, ILSAs have prioritized this topic, which is highly valuable: thanks to their global scope, a panorama can be drawn at both national and international levels, in turn enabling the planning of strategies aimed at addressing the weakest points these studies identify.

3.3 ICILS, TIMSS and PISA

Within ILSAs, there are two major organizations that have promoted this type of study: the IEA and the OECD. On the one hand, the IEA leads and conducts two of the most important large-scale assessment studies: the International Computer and Information Literacy Study5 and the Trends in International Mathematics and Science Study6. On the other hand, the OECD is the orchestrator of the Programme for International Student Assessment7, one of the leading ILSAs in the world. These three studies have overlapping target populations in that they focus on adolescents, specifically young people roughly between the ages of 13 and 15. More importantly, all of the aforementioned studies contain a digital self-efficacy battery. The following paragraphs therefore describe each ILSA at a general level and then show how each addresses digital self-efficacy in its assessment framework. Subsequently, a comparative analysis of the digital self-efficacy batteries is made, both across cycles of the same study and across studies, to finally propose a discussion of the successes and failures of the different measurements of digital self-efficacy that have been proposed.

ICILS is a study of digital literacy that seeks to answer the question: how well are students prepared to study, work and live in a digital world? To this end, the study measures achievement in computer and information literacy (CIL), a concept defined as “an individual’s ability to use computers to investigate, create, and communicate in order to participate effectively at home, at school, in the workplace, and in society” (Fraillon et al., 2013, p. 17). It is worth mentioning that ICILS operationalizes this concept in order to measure digital literacy. The study deploys a complex sample design involving multistage, stratified and cluster sampling techniques (see p. 59, Fraillon et al., 2020).

The first ICILS cycle was conducted in 2013. It involved 22 educational systems and was the inaugural milestone of the study's measurement of CIL achievement (see the official website). The second cycle took place in 2018, covering only 13 countries, but adding Computational Thinking as one of the key domains to be studied. The report on the results of the third cycle, conducted in 2023, was recently published, while the databases will be officially released in March of this year.

The second study to be analyzed in this paper is TIMSS, which is conducted by the IEA and the TIMSS & PIRLS International Study Center at Boston College’s Lynch School of Education and Human Development. This ILSA, as its name implies, focuses on assessing the performance of fourth and eighth grade students in mathematics and science. In addition, it contains questions related to the students’ context.

The first TIMSS study was carried out in 1995, and the study has been conducted every four years without fail since then. Regarding the most recent cycles, TIMSS 2019 involved 72 educational systems, and the subsequent cycle, carried out in 2023, had the same number of participants (see the official website). The results of the eighth and most recent cycle have recently been published, and the next cycle, scheduled for 2027, is already in sight. It is important to mention that the sample design of this study is based on two-stage stratified random sampling, with schools sampled in the first stage and classes of students within each sampled school selected in the second (see p. 3.1, Siegel & Foy, 2024).

Regarding the approach to technology in the TIMSS questionnaires, questions on this topic appeared for the first time in the 2011 cycle, although they referred only to the frequency of computer use at home and at school.

Finally, there is PISA, a study organized and executed by the OECD. PISA is characterized by measuring the ability of 15-year-old adolescents to use their knowledge of reading, mathematics and science to face real-life challenges. This ILSA stands out for the great thematic versatility of its questionnaires: the 2022 cycle, for example, contained seven questionnaires dealing with topics such as financial literacy or well-being. In addition, in each cycle of the study one domain takes center stage; in 2018 reading predominated, while the 2022 study focused especially on mathematics.

PISA originated in 2000, and since that year the study has been carried out every three years. The only cycles that escape this rule are those of 2022 and 2025 (the next to be carried out), since the pandemic and the consequent cancellation of face-to-face classes prevented the surveys from being duly deployed in 2021. For this reason, the eighth cycle of PISA (initially planned for 2021) was postponed to the following year, in turn shifting the date of the subsequent cycle.

This study implements a two-stage stratified sample design: the first-stage sampling units are schools with 15-year-old students, while the second-stage units are students within the sampled schools (see p. 104, OECD, 2024). In terms of participation in the latest cycles, the 2018 cycle included 79 countries and economies, while the 2022 study comprised 81 participants. Both cycles included OECD and non-OECD members.
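
To make the logic of such a two-stage design concrete, the sketch below simulates the selection process: schools drawn within explicit strata, then students drawn within sampled schools. The frame, strata and sample sizes are invented for the example and do not reproduce PISA's actual school-sampling parameters.

```python
# Illustrative sketch of a two-stage stratified sample: schools first (within
# strata), then students within each sampled school. All sizes, strata and
# identifiers are invented; this does not reproduce PISA's actual parameters.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# Stage 0: a hypothetical school frame with one explicit stratum (urban/rural)
frame = pd.DataFrame({
    "school_id": range(1000),
    "stratum": rng.choice(["urban", "rural"], size=1000, p=[0.7, 0.3]),
    "n_15_year_olds": rng.integers(20, 200, size=1000),
})

# Stage 1: sample 25 schools within each stratum
schools = (frame.groupby("stratum", group_keys=False)
                .apply(lambda s: s.sample(n=25, random_state=7)))

# Stage 2: sample up to 40 eligible students within each sampled school
students = []
for _, school in schools.iterrows():
    n = min(40, int(school["n_15_year_olds"]))
    students.append(pd.DataFrame({
        "school_id": school["school_id"],
        "student_no": rng.choice(int(school["n_15_year_olds"]), size=n, replace=False),
    }))
sample = pd.concat(students, ignore_index=True)
print(sample.groupby("school_id").size().head())
```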

A striking aspect of PISA is that the study has included a questionnaire on familiarity with technology from the beginning. Due to rapid technological transformations, this questionnaire has undergone constant modification. Nevertheless, digital self-efficacy appears inconsistently across PISA cycles: it is absent from the 2000 and 2009 questionnaires but present in all other cycles.

Each study has extensive open-access documentation, which can be found on its respective web page; there, the implemented questionnaires, technical reports, databases and other files can be viewed and downloaded. The openness of these data is of great value, since it allows them to be analyzed to generate knowledge in various fields, such as academia or public policy.

Similarly, it is extremely important to clarify that both TIMSS and PISA use characterization questionnaires as their only resource for addressing digital issues, which ultimately does not allow the competencies or skills of respondents to be measured. Not so ICILS, which, in addition to covering characterization elements, deploys a standardized performance test so that digital literacy can be measured in a concrete way.

3.3.1 How do these studies address digital self-efficacy?

Since ICILS is focused on digital issues, it does not have a specific section dealing with the topic; rather, the whole document is framed by it. The study's assessment framework does, however, contain a chapter on contextual determinants, subdivided into the types of context that influence computer and information literacy: home, school, and individual. The latter includes attitudinal and behavioral factors, and it is here that self-efficacy is placed.

To conceptualize self-efficacy, the study paraphrases Bandura’s (1993) definition, stating that self-efficacy is students’ confidence in their own ability to perform tasks in a specific area (Bandura, 1993, as cited in Fraillon et al., 2013). In this way, it is made explicit that the questionnaire aims to measure the confidence students express when performing ICT-related tasks; to this end, ICILS employs the concept of ICT self-efficacy. In addition, the previous wave of the study is mentioned, emphasizing the identification of two dimensions of self-efficacy, basic and advanced, a distinction the study retains. Finally, literature is presented supporting the idea that self-efficacy is a relevant variable for predicting achievement, in this case in computer and information literacy (see p. 41, Fraillon et al., 2019).

PISA has an ICT assessment framework that considers three dimensions: ICT uses, ICT access, and students' ICT competencies. The latter specifies the most relevant competencies identified in existing assessment frameworks on digital literacy. It is worth mentioning that the PISA framework aims to lay the foundations for integrating ICT literacy as a specific domain of the study in the future. Accordingly, the document defines ICT literacy as “the interest, attitude and ability of individuals to appropriately use digital technologies and communication tools to access, manage, integrate and evaluate information, construct new knowledge, and communicate with others in order to participate effectively in society” (Lennon et al., 2003, as cited in OECD, 2023). This definition is not original to PISA, which instead draws on the conceptualization of Lennon et al. (2003).

ICT competencies include knowledge, understanding, attitudes, dispositions and skills, with self-efficacy placed among the attitudes and dispositions towards ICT. In greater depth, the five main areas of ICT competencies are: (1) accessing, managing and evaluating information and data; (2) sharing information and communicating; (3) transforming and creating digital content; (4) individual and collaborative problem solving in digital contexts, and computational thinking; and (5) appropriate use of ICT (knowledge and skills related to safety, security and risk awareness). In this context, it is made explicit that the measure of self-efficacy is the main assessment instrument for ICT competencies.

It is worth mentioning that PISA does not explicitly define “digital self-efficacy” but treats self-efficacy throughout the document in the framework of attitudes and dispositions towards ICT, so that a specific concept of self-efficacy is absent (see p. 277, OECD, 2023).

TIMSS is the study with the least extensive assessment framework on digital issues, so it can be assumed that this section has not been prioritized in the questionnaire; information on student performance in mathematics and science prevails instead.

The TIMSS cycle conducted in 2019 conceptualizes self-efficacy as “student confidence using technology,” considered within the section “Student attitudes toward learning,” where student attitudes toward mathematics and science are also highlighted (see p. 72, Mullis & Martin, 2019). In the 2023 cycle, the concept changes to “digital self-efficacy,” framed within the section “Information technologies and digital services,” in which two variables are specified: (1) uses of digital services and (2) digital self-efficacy. In neither cycle is there prior contextualization supporting the presentation of these topics, a deep justification of why self-efficacy in digital matters is relevant, or a conceptualization of self-efficacy as such. Despite these gaps, the study does employ a specific concept in both cycles (see p. 59, Mullis et al., 2023).

Despite this lack of theoretical depth regarding digital self-efficacy, one of the most interesting features of TIMSS is that, in addition to the digital self-efficacy battery, it includes self-efficacy batteries for mathematics and science, which could open certain lines of research on this specific topic.

3.3.2 Measures of digital self-efficacy

Table 3. Digital self-efficacy measures in the two most recent cycles of ICILS, PISA and TIMSS.

| Characteristic | ICILS 2013 | ICILS 2018 | PISA 2018 | PISA 2022 | TIMSS 2019 | TIMSS 2023 |
|----------------|------------|------------|-----------|-----------|------------|------------|
| Specific concept | ICT self-efficacy | ICT self-efficacy | Self-efficacy | Self-efficacy | Student confidence using technology | Digital self-efficacy |
| Number of dimensions | 2 | 2 | 1 | 1 | 1 | 1 |
| Types of dimensions | Basic and advanced | General and specialized | General | General | General | General |
| Item phrasing | How well can you do each of these tasks on a computer? | How well can you do each of these tasks on a computer? | Thinking about your experience with digital media and digital devices: to what extent do you disagree or agree with the following statements? | To what extent are you able to do the following tasks when using <digital devices>? | How much do you agree with these statements? | How much do you agree with these statements? |
| Response categories | I know how to do this (1); I could work out how to do this (2); I do not think I could do this (3) | I know how to do this (1); I could work out how to do this (2); I do not think I could do this (3) | Strongly disagree (1); Disagree (2); Agree (3); Strongly agree (4) | I cannot do this (1); I struggle to do this on my own (2); I can do this with a bit of effort (3); I can easily do this (4); I don't know what this is (5) | Agree a lot (1); Agree a little (2); Disagree a little (3); Disagree a lot (4) | Agree a lot (1); Agree a little (2); Disagree a little (3); Disagree a lot (4) |
| Differences between cycles | Self-efficacy dimensions; battery items | Self-efficacy dimensions; battery items | Item phrasing; battery items; response categories | Item phrasing; battery items; response categories | Self-efficacy concept; battery items | Self-efficacy concept; battery items |

Battery items by study and dimension:

ICILS 2013, basic dimension:

1. Search for and find a file on your computer
2. Edit digital photographs or other graphic images
3. Create or edit documents (for example, assignments for school)
4. Search for and find information you need on the Internet
5. Create a multi-media presentation (with sound, pictures, or video)
6. Upload text, images or video to an online profile

ICILS 2013, advanced dimension:

1. Use software to find and get rid of viruses
2. Create a database (for example, using [Microsoft Access])
3. Build or edit a webpage
4. Change the settings on your computer to improve the way it operates or to fix problems
5. Use a spreadsheet to do calculations, store data or plot a graph
6. Create a computer program or macro (for example, in [Basic, Visual Basic])
7. Set up a computer network

ICILS 2018, general dimension:

1. Edit digital photographs or other graphic images
2. Write or edit a text for a school assignment
3. Search for and find relevant information for a school project on the Internet
4. Create a multimedia presentation (with sound, pictures, or video)
5. Upload text, images, or video to an online profile
6. Insert an image into a document or message
7. Install a program or [app]
8. Judge whether you can trust information you find on the Internet

ICILS 2018, specialized dimension:

1. Create a database (e.g., using [Microsoft Access])
2. Build or edit a webpage
3. Create a computer program or [app] (e.g., in [Basic, Visual Basic])
4. Set up a local area network of computers or other ICT

PISA 2018 (general dimension):

1. I feel comfortable using digital devices that I am less familiar with
2. If my friends and relatives want to buy new digital devices or applications, I can give them advice
3. I feel comfortable using digital devices at home
4. When I come across problems with digital devices, I think I can solve them
5. If my friends and relatives have a problem with digital devices, I can help them
6. If I need new software, I install it by myself
7. I read information about digital devices to be independent
8. I use digital devices as I want to use them
9. If I have a problem with digital devices, I start to solve it on my own
10. If I need a new application, I choose it by myself

PISA 2022 (general dimension):

1. Search for and find relevant information online
2. Assess the quality of information you found online
3. Share practical information with a group of students
4. Collaborate with other students on a group assignment
5. Explain to other students how to share digital content online or on a school platform
6. Write or edit text for a school assignment
7. Collect and record data (e.g., using data loggers, <Microsoft Access>, <Google form>, spreadsheets)
8. Create a multimedia presentation (with sound, pictures and video)
9. Create, update and maintain a webpage or blog
10. Change the settings of a device or app in order to protect my data and privacy
11. Select the most efficient programme or app that allows me to carry out a specific task
12. Create a computer program (e.g., in <Scratch>, <Python>, <Java>)
13. Identify the source of an error in a software after considering a list of potential causes

TIMSS 2019 (general dimension):

1. I am good at using a computer
2. I am good at typing
3. I can use a touchscreen on a computer, tablet or smartphone
4. It is easy for me to find information on the Internet
5. I can look up the meanings of words on the Internet
6. I can write sentences and paragraphs using a computer
7. I can edit on a computer

TIMSS 2023 (general dimension):

1. I can write and edit text on a computer, tablet or smartphone
2. I can create school presentations using a computer, tablet or smartphone
3. I can create tables, charts and graphs using a computer, tablet or smartphone
4. I can find information that I need online
5. I can tell if a website is trustworthy
6. I can easily do new things on computers, laptops or smartphones
7. I can help my friends or family members with using their computers, laptops and smartphones

Table 3 shows the digital self-efficacy batteries from the last two cycles of each study mentioned above, including characteristics such as item phrasing and response categories, in order to illustrate the differences that exist between cycles of a single study, as well as the differences between studies.

First, in ICILS the specific concept used to address digital self-efficacy has been maintained: “ICT self-efficacy” in both cycles. Likewise, both the item phrasing and the response categories proposed in the first cycle of the study remained unchanged in the second cycle.

By contrast, the change in the labeling of the ICT self-efficacy dimensions stands out: while in the 2013 cycle they were defined as “basic and advanced,” in 2018 this was modified to “general and specialized.” The most significant change, however, lies in the items that make up the ICT self-efficacy batteries across the two cycles. In the basic-general dimension, the 2013 cycle has 6 items, with a tendency towards digital editing and searching tasks, while the 2018 cycle has 8 items in total, adding topics such as program installation and information evaluation. In the advanced-specialized dimension, on the other hand, the number of items decreases between the first and second cycles: in 2013 this battery comprised 7 items, and in 2018 only 4, leaving aside tasks such as removing viruses from a computer or using spreadsheets.

The digital self-efficacy batteries in PISA change almost all of their characteristics from one cycle to the next, keeping only the treatment of self-efficacy as a single dimension. First, the item phrasing in the 2018 cycle asks to what extent the respondent agrees or disagrees with the statements presented; in the 2022 cycle, by contrast, it asks to what extent the respondent is able to perform the tasks presented. These changes in phrasing also have implications for the response categories, since they answer different questions: in the 2018 cycle the responses run on a scale from “strongly disagree” to “strongly agree,” while in the last cycle they range from “I cannot do this” to “I can easily do this.”
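
These diverging response formats mean that any secondary analysis pooling the two cycles must first harmonize the scales. Below is a minimal sketch of one possible recoding, in which the column names are hypothetical, PISA 2022's "I don't know what this is" category is treated as missing, and the 0-1 rescaling is only one of several defensible choices.

```python
# Minimal sketch: harmonizing the 2018 agreement scale and the 2022 ability
# scale before pooling. Column names and the 0-1 rescaling are assumptions.
import pandas as pd

pisa2018_map = {"Strongly disagree": 1, "Disagree": 2,
                "Agree": 3, "Strongly agree": 4}
pisa2022_map = {"I cannot do this": 1, "I struggle to do this on my own": 2,
                "I can do this with a bit of effort": 3, "I can easily do this": 4,
                "I don't know what this is": None}  # recoded to missing (NaN)

def rescale_0_1(s: pd.Series, low: int, high: int) -> pd.Series:
    """Map ordinal codes in [low, high] onto [0, 1] for cross-cycle comparison."""
    return (s - low) / (high - low)

df18 = pd.DataFrame({"se_item": ["Agree", "Strongly agree", "Disagree"]})
df18["se_01"] = rescale_0_1(df18["se_item"].map(pisa2018_map), 1, 4)

df22 = pd.DataFrame({"se_item": ["I can easily do this", "I don't know what this is"]})
df22["se_01"] = rescale_0_1(df22["se_item"].map(pisa2022_map), 1, 4)
print(df18, df22, sep="\n")
```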

The items of the PISA batteries also underwent several changes. First, the 2022 cycle has three more items than the 2018 cycle. To elaborate, the 2018 items are mainly made up of statements alluding to a feeling of comfort with the use of digital services, as well as tasks pointing to autonomy in the digital sphere. In contrast, the 2022 items focus on task performance, covering skills ranging from practical knowledge to critical evaluation. In addition, the battery is generally constructed in such a way that the first items involve tasks of little complexity, while the last items require much more in-depth knowledge and skills in the digital domain.

The measurement of digital self-efficacy in TIMSS has maintained certain characteristics, specifically the single self-efficacy dimension, the item phrasing and the corresponding response categories. The question supporting this battery asks about the degree of agreement with the statements, so the response categories range from “agree a lot” to “disagree a lot.”

However, the concept used for self-efficacy in digital matters differs between the two TIMSS cycles: the 2019 cycle used “student confidence using technology,” which in the 2023 cycle changed to “digital self-efficacy.”

Another aspect that underwent modification was the construction of the digital self-efficacy batteries. In the earlier cycle, the battery consisted of 7 items focused on practical tasks of low complexity, among them “being good at typing” or “being able to use a touchscreen.” TIMSS 2023 has the same number of items, but they address tasks requiring greater knowledge in the field, such as creating presentations or graphs, or feeling able to help others in the use of digital services.

Between ICILS, PISA and TIMSS, major differences can be noted in the characteristics of their digital self-efficacy batteries. First, TIMSS is the only study that has modified its conceptualization of self-efficacy in relation to technology. ICILS, for its part, stands out for comprising two dimensions of digital self-efficacy, whereas the other studies take into account only one generalized dimension. The item phrasings, and the response categories derived from them, also differ between studies; the closest in these measurement characteristics are PISA 2018 and the two TIMSS cycles, which all measure degree of agreement.

The greatest differences are found in the items that make up the digital self-efficacy batteries. Each study has transformed its items, some adding a greater number of statements or tasks of greater complexity, and others omitting aspects that were measured in their respective previous cycles.

This section conducted a comparative analysis of the construction of the digital self-efficacy batteries, observing the items that constitute them in the different cycles of the studies, as well as the differences both between cycles of the same study and between studies. The next section deepens the analysis of how self-efficacy in digital matters has been measured in these studies, through an evaluation based on the conceptual framework provided by Bandura, presented in the first part of this working paper.

3.3.3 A comparison of operationalization strategies

As can be seen, these three ILSA studies take different theoretical approaches to digital self-efficacy. Given their respective limitations, each focuses on some dimensions of the self-efficacy concept and leaves out others. The following section is a comparative analysis of the orientations and definitions of each study in relation to the conceptual discussions of self-efficacy.

ICILS adopts a task orientation towards the self-efficacy concept in both of its cycles. The questions ask whether students can or cannot do concrete activities with technologies, leaving out the self-regulatory elements of the concept: the items contain no query on levels of confidence or comfort in the process of doing the task. In that sense, ICILS focuses on the magnitude of task achievement over its strength; the concern is with the degree of mastery of digital devices in isolation. This same approach explains why it is the only ILSA that divides its self-efficacy battery in two a priori: the first part for a lower level of complexity of mastery, and the second for a more advanced stage of that process.

The batteries in both the 2013 and 2018 cycles are named “ICT self-efficacy,” which is consistent with the development of the debate on digital self-efficacy (Ulfert-Blank & Schmidt, 2022): the 2010s were the period in which computer and Internet self-efficacy were synthesized into ICT self-efficacy. It is noticeable that the ICILS scales measure not only technical or informational tasks (as the computer self-efficacy approach does), such as application development or file manipulation, but also tasks of an evaluative nature and digital content creation. Likewise, it is important to note that the scale is computer-oriented: most of the tasks listed cannot be done on smartphones or other digital systems with less processing power.

The phrasing of the ICILS question again demonstrates the study's focus on digital literacy over a comprehensive understanding of digital competences. The question looks at “how well” the task is done, orienting the concept towards a sense of neatness or perfection of accomplishment rather than levels of confidence in the process. Regarding the phrasing of the responses, ICILS is consistent with the recommendations of the specialized literature on self-efficacy measurement (Bandura, 2006; Pajares & Schunk, 2002; Williams & Rhodes, 2016), in that it avoids can-do statements and opts for a strategy that separates individuals who declare they know how from those who declare they do not know how but believe they could work it out, and from those who declare themselves unable to do so. However, the relatively categorical nature of the response options tends to limit the analysis of nuances and degrees of individuals' self-efficacy, and the options also have the problem of asking about future capabilities rather than actual ones (Bandura, 2006).

PISA presents an inconsistent approach to measuring self-efficacy. In the 2018 version, both the question phrasing and the items refer to the self-regulatory dimension of the concept, specifically coping self-efficacy. The phrasing places the individual in a context by asking about his or her own experience with digital devices; some items then explicitly ask about levels of comfort, while others encourage individuals to think about their interpersonal relations (friends, family, etc.) before judging their own level of self-efficacy. The 2022 version makes a qualitative leap: the scale opts for a battery more similar to the ICILS approach, where the phrasing assumes individual isolation and the items identify concrete digitally mediated tasks to be solved.

Although PISA's documentation does not adopt a specific term for digital self-efficacy, it is clear that in 2018 it presented a rather fuzzy concept, approaching self-efficacy for the general use of digital devices without specifying concrete tasks applicable across different systems. In that sense, it is difficult to place this proposal within the debate identified by Ulfert-Blank & Schmidt (2022). In the 2022 case, the items presented are in tune with the most recent digital self-efficacy concept, as they identify not only digital literacy aspects but also digital creation, security and problem solving, independently of particular digital systems. On closer inspection, however, even though the approach connects with current standards and suggestions for measuring the concept, it does not do justice to the complexity of the security and problem-solving dimensions, leaving only a single minimal item for each of these competences.

With respect to the response wording, the PISA proposals let researchers delve into graded levels of self-efficacy, as they present more than three alternatives with a clearly ordinal arrangement. In both cycles, the responses capture degrees rather than a dichotomy, following the strategy suggested by Bandura (2006) and Williams & Rhodes (2016) of not dichotomizing between being able and not being able to do the task, but instead establishing levels of confidence in achieving it. However, in 2018 the categories form a bipolar scale, which is not aligned with Bandura's (2006) recommendation of unipolar scales.

TIMSS explicitly declares its adoption of the “digital self-efficacy” terminology, but this does not guarantee compliance with the multidimensional and heterogeneous standards that Ulfert-Blank & Schmidt (2022) suggest. The 2019 cycle takes a very basic and minimalistic approach to digital efficacy expectancy: the tasks measured are excessively general and simple, such as “use a touchscreen” or “edit on a computer.” Although the item phrasing shows a good balance between the task and self-regulatory approaches to self-efficacy, the tasks turn out to be so abstract that they lose their specificity. The 2023 cycle presents greater complexity, but ignores tasks beyond informational, evaluative or creative skills with technologies, such as problem solving or security. The battery is short and does not allow a deeper dive into the concept of digital self-efficacy as discussed by Ulfert-Blank & Schmidt (2022).

Like PISA 2018, both TIMSS cycles subscribe to a bipolar agreement scale on task achievement, which is not aligned with Bandura's (2006) assessments. The measure allows degrees of self-efficacy to be identified, but does not distinguish whether individuals can or cannot accomplish the task at all (for this, a scale from 0 to a maximum level would have to be used).
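
As a worked illustration of that last point: Bandura (2006) recommends unipolar confidence ratings running from 0 ("cannot do at all") up to a maximum such as 100 ("highly certain can do"). The sketch below, with invented responses, shows that only the unipolar coding makes "cannot do the task at all" observable as a true zero, whereas a bipolar agreement code never does.

```python
# Sketch contrasting a TIMSS-style bipolar agreement coding with Bandura's
# (2006) unipolar 0-100 confidence scale. All responses are invented.
import pandas as pd

responses = pd.DataFrame({
    # Bipolar agreement codes: 1 = agree a lot ... 4 = disagree a lot
    "agreement_code": [1, 2, 3, 4],
    # Unipolar confidence: 0 = cannot do at all ... 100 = highly certain can do
    "confidence_0_100": [95, 60, 20, 0],
})

# Only the unipolar scale distinguishes "cannot do this at all" (a true zero)
# from merely disagreeing with a can-do statement.
responses["cannot_do"] = responses["confidence_0_100"].eq(0)
print(responses)
```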

In summary, ICILS emphasizes the operational and technical aspects of ICT to a greater extent, proposing two measures that linearly increase the complexity of the tasks while excluding the attitudinal aspect of the process. PISA details more fully the self-regulation mechanisms that accompany the concepts of self-efficacy and digital competence, although it presents inconsistencies between its cycles. Finally, TIMSS has a minimalist strategy that does not give digital self-efficacy enough emphasis to deepen the debates surrounding the concept.

3.3.4 Suggestions from Team NUDOS

Once we have discussed these results in depth, it would be pertinent to put together a proposal or set of suggestions for the three ILSA studies, consistent with their respective approaches… (This work remains pending, as it requires us to meet and pool ideas together.)

References

Bandura, A. (2006). Guide for constructing self-efficacy scales. In Self-Efficacy Beliefs of Adolescents. IAP.
Cabero Almenara, J., & Llorente Cejudo, M. C. (2008). La alfabetización digital de los alumnos: competencias digitales para el siglo XXI. Revista portuguesa de pedagogía, 42(2), 7–28. https://doi.org/10.14195/1647-8614_42-2_1
Echeverría Ezponda, J. (2010). Las TIC en las relaciones entre Europa y Latinoamérica. Revista internacional de los estudios vascos, 55(1), 61–94.
Fraillon, J., Ainley, J., Schulz, W., Duckworth, D., & Friedman, T. (2019). Contextual framework. In IEA International Computer and Information Literacy Study 2018 Assessment Framework (pp. 33–42). Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-19389-8_4
Fraillon, J., Ainley, J., Schulz, W., Friedman, T., & Duckworth, D. (2020). Sample design and implementation. In IEA International Computer and Information Literacy Study 2018 Technical Report. International Association for the Evaluation of Educational Achievement (IEA).
Fraillon, J., Schulz, W., & Ainley, J. (2013). IEA International Computer and Information Literacy Study 2013 Assessment Framework. International Association for the Evaluation of Educational Achievement (IEA).
Husén, T. (1979). An international research venture in retrospect: The IEA surveys. Comparative Education Review, 23(3), 371–385. https://doi.org/10.1086/446067
Johansson, S. (2016). International large-scale assessments: What uses, what consequences? Educational Research, 58(2), 139–148.
Mullis, I. V. S., & Martin, M. O. (2019). TIMSS 2019 Context Questionnaire Framework. In TIMSS 2019 Assessment Frameworks. TIMSS & PIRLS International Study Center, Lynch School of Education, Boston College and International Association for the Evaluation of Educational Achievement (IEA).
Mullis, I. V. S., Martin, M. O., & von Davier, M. (2023). TIMSS 2023 Context Questionnaire Framework. In TIMSS 2023 Assessment Frameworks. TIMSS & PIRLS International Study Center, Lynch School of Education and Human Development, Boston College and International Association for the Evaluation of Educational Achievement (IEA).
OECD. (2023). PISA 2022 ICT Framework. In PISA 2022 Assessment and Analytical Framework (pp. 238–285). Paris: OECD Publishing.
OECD. (2024). PISA 2022 Technical Report. OECD Publishing.
Pajares, F., & Schunk, D. H. (2002). Self and self-belief in psychology and education: A historical perspective. In J. Aronson (Ed.), Improving Academic Achievement (pp. 3–21). San Diego: Academic Press. https://doi.org/10.1016/B978-012064455-1/50004-X
Siegel, P., & Foy, P. (2024). TIMSS sample design. In TIMSS 2023 Technical Report (Methods and Procedures) (pp. 3.1–3.30). Boston College, TIMSS & PIRLS International Study Center.
Trucco, D., & Sunkel, G. (2010). Nuevas tecnologías de la información y la comunicación para la educación en América Latina: riesgos y oportunidades. CEPAL.
Ulfert-Blank, A.-S., & Schmidt, I. (2022). Assessing digital self-efficacy: Review and scale development. Computers & Education, 191, 104626. https://doi.org/10.1016/j.compedu.2022.104626
Williams, D., & Rhodes, R. E. (2016). The confounded self-efficacy construct: Review, conceptual analysis, and recommendations for future research. Health Psychology Review, 10(2), 113–128. https://doi.org/10.1080/17437199.2014.941998

  1. Hereinafter, ILSAs↩︎

  2. Hereinafter, IEA↩︎

  3. Hereinafter, OECD↩︎

  4. Hereinafter, ICTs↩︎

  5. Hereinafter, ICILS↩︎

  6. Hereinafter, TIMSS↩︎

  7. Hereinafter, PISA↩︎