Friend or Foe? The Ethical Debate Around ChatGPT in Schools

03/26/2024 12:00 AM • Admin • AI tools


The Ethical Debate Around ChatGPT in Schools

In recent years, the integration of artificial intelligence (AI) tools like ChatGPT in educational settings has sparked a heated discussion among educators, policymakers, researchers, and ethicists around the world. This debate revolves around the potential benefits and drawbacks of AI implementation, as well as the ethical implications and long-term effects it may have on students, educators, and society as a whole.

Supporters argue that AI tools can enhance learning experiences, personalize education, and provide valuable insights for educators; with AI, they believe, educational institutions have an opportunity to revolutionize how knowledge is imparted and acquired. Critics, however, express concerns about potential misuse, such as data privacy breaches, algorithmic bias, and the loss of human connection in the learning process. They argue that over-reliance on AI may hinder critical thinking skills, stifle creativity, and exacerbate existing educational inequalities.

In this complex and multifaceted discussion, finding a balance between the advantages and risks of AI integration is crucial. It requires careful consideration of ethical principles, comprehensive policies, and ongoing research to ensure that AI tools are designed and implemented in ways that promote inclusive and beneficial educational practices. As AI technology continues to shape the educational landscape, all stakeholders must engage in collaborative dialogue, evidence-based decision-making, and continuous evaluation to navigate the implications of AI in education effectively. By doing so, we can harness the potential of AI while upholding essential educational values, promoting equity, and preparing students for the challenges and opportunities of the future.

1. Introduction

Given that the intended role of ChatGPT as a teaching assistant necessitates interaction with students, we are essentially concerned with its safety as a tool for education. Depending on the age of the students and the specific nature of their difficulties, the students in a learning environment can constitute a vulnerable population. Although it is possible to avoid training a model on inappropriate text, it is incredibly difficult to prevent it from generating such text itself. Even GPT-2, which is known to be less biased and controversial than more recent models, has been shown to exhibit undesirable behavior given the right prompt. An immediate and pressing concern is that the model might generate text that is unsuitable for children: this could be outright harmful material, or something subtler, such as answering a creative writing prompt with a violent story. A further concern is that a model exhibiting sexist, racist, or otherwise inappropriate language might hinder the progress of a student who belongs to a group targeted by that language.

This paper discusses the ethical implications of using OpenAI's ChatGPT as a teaching assistant in schools. The issue first came to light when researchers who wanted to deploy an AI teaching assistant for a language learning task were refused exemption from ethics review, on the grounds that the teaching assistant might hear something it shouldn't. There are various other projects where GPT-2 or a similar model might be used as an auxiliary teaching tool, particularly for students who struggle with written language. For the foreseeable future, even projects that modify the model and do not involve interaction with vulnerable populations would still fall within the purview of the concerns raised about ChatGPT use. We will now give a more in-depth explanation of what is meant by ChatGPT and why, in spite of its potential, it presents a myriad of ethical concerns.

1.1 Background of ChatGPT in education

ChatGPT is a language generation model designed to imitate human interaction. It is a machine learning model trained to respond to human text inputs, producing a conversation tailored to what the user types. The result is a chat simulator that can engage in realistic and coherent conversation on a wide range of topics. Research and development of the underlying GPT-3 model was completed in 2020, and it has received immense interest and enthusiasm from consumers.

Education is just one of the countless fields in which ChatGPT could be utilized. If incorporated into education, ChatGPT could be used to teach students in a one-on-one setting. For example, it could support language, conversation, and speaking skills in foreign language classes: a teacher could have a student "speak" to the GPT in the foreign language and have the GPT respond, creating a conversation simulation in which the student applies what they have learned in a realistic setting. Another example is an English class, where students could use the GPT for writing exercises: a student could write an essay or a piece of creative writing and have the GPT provide "peer" feedback. These are just a couple of examples of how GPT could be used in teaching; the possibilities are virtually endless.

1.2 Importance of addressing ethical concerns

ChatGPT, like many technological platforms, has the potential to transform how students engage in learning across various educational settings. However, given the important role of education in shaping the knowledge, skills, and dispositions of future citizens, there are ethical concerns associated with using this tool in schools. An argument has been made that researchers, schools, and educators should not avoid exploring such tools out of fear of making a mistake. Rather, exploration should proceed cautiously, with intentional consideration of potential consequences and clear justification of how the tool benefits student learning.

As Hammond (2013) discusses in relation to ethical analysis in professional learning, it is important to consider not only the intended consequences of a tool or practice, but also its possible unintended or unanticipated effects. He states, "Professional responsibility is not a matter of seeking to do 'good' as opposed to 'bad', but to search for the better action while taking into consideration the possible and probable outcomes". In taking such a responsible and reflective approach, educators can use ChatGPT as a learning experience, engaging students in an ongoing dialogue about the complexities of ethical decision making in a digital world. This approach also offers the potential to contribute valuable insights to the broader educational community about the benefits and pitfalls of using AI-based tools with students.

2. Benefits of ChatGPT in Education

Students with disabilities may find it easier to engage in classroom activities with chatbots, and chatbots can provide personalized learning experiences. ChatGPT can offer students who need extra help a tool they can use to learn at their own pace. For example, a student who is falling behind can use an AI chatbot to work through problems until they understand the concept. This is something a human teacher can rarely provide, since teachers must split their time among many students. At the same time, chatbots can increase one-on-one time with a teacher: if a chatbot handles routine questions for an entire class, the teacher is freed to spend more individual time with students who need it. Language learning can also be made easier with chatbots, as they can provide on-the-spot translations, and as AI chatbots become more advanced they are expected to become even more useful for language students.

2.1 Enhancing student engagement and participation

It is the objective of any progressive educator to seek out better means and methods of engaging students in their learning. The educator who observes the behavior of students in self-directed learning increasingly realizes its inadequacy. There is both empirical observation and research evidence to show that the ability of students to profit from further study varies in direct ratio with the extent to which it is accompanied by the guidance and tutorial behavior of the teacher. We have also learned that a student's response to learning academic knowledge or a new skill is influenced by the strength of their own motivations, and that intrinsic motivators bring more assured and satisfying results than extrinsic ones. Chatbots may offer potential for both improved guidance and increased motivation in student learning. A well-designed chatbot can give the student the impression of a personal tutor who is always available to answer questions and give help, providing dialogue and feedback in response to the student's actions that are similar in nature to what a human tutor would provide. Whether the inexorable growth in availability of chatbots yields long-term benefits remains an empirical question, but there is a prima facie case that they will enhance student engagement and thus the efficacy of student learning.

2.2 Providing personalized learning experiences

Using AI in educational chat has the benefit of being able to personalize learning experiences. The typical classroom has one teacher and many students, and the teacher will often teach to the middle, so some students do not receive the attention they need. This can be frustrating for students who are falling behind on the material. For example, studies have shown that up to 40% of Australian mathematics students are not working at their optimum level because the work is either too easy or too hard for them. GPTs can be programmed to cater for an individual student's level and adapt instantly to their changing needs. Below is a scripted example of a basic one-on-one tutoring session with a student:

Student: I'm having trouble understanding this math problem I have to do.
Tutor: Sure, I can help you with that. Show me the problem.

The program can be scripted to recognize key words and suggest various methods of problem solving to the student. Over time, the tutor can learn more about the student's strengths and weaknesses and adjust the difficulty of problems to best suit the student. This can be done today with non-AI tutoring programs, but the ability of AI to recognize natural language makes it a far more powerful tool.
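The difficulty-adjustment loop that tutoring programs use can be sketched in a few lines. This is a hypothetical illustration, not ChatGPT's actual mechanism: the class name, the three-attempt window, and the five-point difficulty scale are all invented for the example.

```python
# Hypothetical sketch: adapting problem difficulty to a student's
# recent performance, as a tutoring program might do. The thresholds
# and level scale are illustrative assumptions.

class AdaptiveTutor:
    def __init__(self, level=3, min_level=1, max_level=5):
        self.level = level          # current difficulty (1 = easiest)
        self.min_level = min_level
        self.max_level = max_level
        self.history = []           # recent correct/incorrect results

    def record_result(self, correct):
        """Store the outcome of the latest problem and adjust difficulty."""
        self.history.append(correct)
        recent = self.history[-3:]  # look at the last three attempts
        if len(recent) == 3:
            if all(recent):                  # three in a row correct
                self.level = min(self.level + 1, self.max_level)
                self.history.clear()
            elif not any(recent):            # three in a row wrong
                self.level = max(self.level - 1, self.min_level)
                self.history.clear()

tutor = AdaptiveTutor()
for outcome in [True, True, True]:   # student answers three correctly
    tutor.record_result(outcome)
print(tutor.level)                   # difficulty rises from 3 to 4
```

A real AI tutor would replace the crude correct/incorrect counter with its assessment of the student's natural language responses, but the feedback loop is the same.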

2.3 Assisting with language learning and translation

As well as the benefits for ESL students, ChatGPT has the potential to assist language learning for students in regular language programs, both by providing explanations and instructions and by improving students' comprehension of written or spoken language. According to Poekert (2007), language learning can be improved through strategies in which students are required to construct responses to questions or problems, such as the "exploratory" dialogue that has been used to describe certain patterns of tutorial talk. This contrasts with the more commonly used "interactive" and "instructional" dialogues, in which the teacher provides a lecture or direct instruction. By using open-ended prompts, or prompts that require students to seek solutions to problems, teachers can steer GPT toward responses more closely aligned with "exploratory" dialogue, effectively modeling the type of speaking and writing students are trying to learn. This can be especially useful when the teacher is preparing students for a translation exercise: GPT can generate model translations that students compare against their own attempts, giving them a clearer sense of how successful their translations are.

3. Drawbacks of ChatGPT in Education

ChatGPT's capabilities in automatic text generation pose an increased risk of academic dishonesty and cheating. While solutions were not particularly clear in my research, the chance that students will use the program to generate sentences, paragraphs, or complete essays is a threat to the assessment of a student's academic ability. It is obvious why a student would take this option rather than doing the work themselves, as the time and effort required is minuscule compared to traditional methods.

In a study of students' perspectives on AI systems in education, one student admitted they would use the program as a way to "bullshit [their] way through assessments". Results from an experiment presented in the same paper show that a small percentage of students would misuse the program now, with a drastically higher likelihood of misuse in the future. This possibility of misuse may never have a concrete solution: with continuous development in AI and no limits on its learning algorithms, enforcing restrictions on chatbots may be an impossible task.

This ties directly into the concern of the chatbot's deployment into commercial systems. With likely future integration into systems such as Turnitin, misuse may become a large-scale problem, with platform banning the only form of resolution. Such a situation would render the chatbot useless to the honest student seeking legitimate help with no intention of cheating. Given the chatbot's successful assistance to students to date, as shown throughout this paper, a future where the chatbot is regarded only as a tool for cheating is concerning.

3.1 Potential for academic dishonesty and cheating

The potential for academic dishonesty arises when students use ChatGPT to bypass doing their own work. The capability of an OpenAI GPT-3 or similar conversational program to generate an essay or assignment simply from the topic requirements presents a major avenue for cheating. Students caught submitting AI-generated work as their own risk not only receiving no grade on the assignment, but also review by an academic integrity board and academic probation. Yet the more sinister impact may come from those who get away with it: if teachers cannot detect AI-generated text, dishonest students may be elevated to higher grades, awards, or even scholarships and acceptances they would not attain honestly. This in turn gives undue rewards and advancement to these students in their future endeavors, taking opportunities from others who worked for them honestly. In the worst case, this corrupting influence could lead students to use AI to pass credentialing tests for work in law, medicine, or engineering, fields where mistakes can cost human lives. Public trust in the competency and qualifications of professionals could suffer if it became known that credentials were earned with AI assistance.

3.2 Lack of critical thinking and creativity development

It might be expected that a machine learning chatbot capable of producing human-like text and sustaining conversation would foster critical thinking and creativity in students. However, such an outcome is not particularly high on ChatGPT's agenda, nor is it the specific purpose of the program. Ideally, students would craft more thoughtful and argumentative responses in order to elicit an adequate, human-like counter-response. Unfortunately, given the nature of the program's predictive text generation, such responses are often much easier to obtain by simply using the bot itself.

In a study where college students engaged with a chatbot, many became frustrated with the AI's lack of logical and coherent conversation, eventually devolving into one-word responses to the chatbot's questions just to move the conversation along. While chatbot enthusiasts may argue that this is indicative of a more advanced AI that is difficult to distinguish from human behavior, it is not behavior conducive to critical thinking and creativity in a learning environment.

Rote learning produces exactly the kind of easily remembered, easily replicated information that a machine learning AI can mimic, and this is no fault of the AI. Unfortunately, this model is widely adopted in educational instruction. Teachers scrambling to meet ever-tightening time constraints find themselves forced to prepare students for standardized tests, where the selectivity of the material ensures that only isolated bits of information are relevant. Students memorize these facts only to regurgitate them onto a test, then promptly discard the now useless information.

The programmed responses in AIML-style chatbots essentially follow this same logic, storing patterns as they are matched and using that memory to generate responses. This pattern-match-and-store method allows the AI to 'learn' new patterns of information, but the information is often out of context, as the bot has no actual means of understanding, and it may not produce the desired response. This is akin to a student learning isolated facts with no real understanding or ability to apply them.
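The pattern-match-and-store behavior this section describes can be illustrated with a toy example. This is a deliberately simplified sketch, not AIML itself and not ChatGPT's architecture; the patterns and reply templates are invented, and the point is that the bot retrieves stored responses without any understanding of their content.

```python
# Minimal sketch of the "pattern match and store" idea behind AIML-style
# chatbots: responses are retrieved by matching stored patterns, with no
# understanding of the content. Patterns and replies are invented examples.

import re

class PatternBot:
    def __init__(self):
        # (compiled pattern, response template) pairs
        self.rules = [
            (re.compile(r"my name is (\w+)", re.I), "Nice to meet you, {0}."),
            (re.compile(r"i like (\w+)", re.I), "Why do you like {0}?"),
        ]

    def learn(self, pattern, template):
        """'Learn' a new pattern by storing it -- no understanding involved."""
        self.rules.append((re.compile(pattern, re.I), template))

    def respond(self, text):
        for pattern, template in self.rules:
            match = pattern.search(text)
            if match:
                return template.format(*match.groups())
        return "Tell me more."   # fallback when nothing matches

bot = PatternBot()
print(bot.respond("My name is Ada"))      # Nice to meet you, Ada.
print(bot.respond("The sky is falling"))  # Tell me more. (no pattern stored)
```

The fallback line is where the frustration described above comes from: anything outside the stored patterns produces a generic, contextless reply, the conversational equivalent of a memorized fact applied in the wrong place.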

3.3 Ethical concerns regarding data privacy and security

Although OpenAI's ChatGPT can be a tool for enhancing student learning, it comes with a trade-off in data privacy that should concern all parents. An AI model designed to suggest intelligent responses in natural language conversation raises questions about how data is used and retained. If the application is used for free-form chat, students will inevitably discuss personal matters with the chatbot, since the conversation feels no different from one between friends. This data is recorded and processed to improve the model's understanding of natural language and to provide the context awareness needed for human-like conversation. Although OpenAI states that it holds high standards for keeping data confidential and uses data only to enhance the language model, many have doubts about how securely the data is stored and whether complete confidentiality can be assured. In a hypothetical scenario, if an educational institution had access to student logs of conversational data from ChatGPT, student data could be analyzed by staff without student consent.

In Australia, there is the further issue of consent when students under 18 use the chatbot. Guidelines for using OpenAI's ChatGPT in an educational context recommend that parents sign a permission form, because the chatbot will be used in an unsupervised environment in which students interact naturally with the tool. This directly exposes young people's data, which entrenches the need for the chatbot to maintain confidentiality and assure secure data retention. It also presents a significant legal barrier to how the tool can be used, as the requirement to gather parent permission forms makes it laborious to integrate an AI tool as standard practice in education.

4. Strategies for Promoting Ethical Use

To mitigate the risks associated with unsupervised student use of language AI, teachers and school leaders can promote ethical use through policy and education. Complete prohibition of language AI will likely be ineffective, given its wide accessibility and considerable educational utility. Policies instead need to accommodate safe and ethical use of AI, providing clear guidelines and instruction. Students need to fully understand what is and is not acceptable use of GPT-3 and similar AI in the context of specific activities or assignments versus general inquiry and curiosity. Guidelines should discourage or restrict the use of AI for producing completed work or for inappropriate personal use, such as chatbot dating. The ability for teachers to monitor or moderate student use of AI in some fashion has also been raised; however, this brings us into costly and complex technical territory and potentially stifles the genuine inquiry and learning we wish to foster. Ideally, teachers can best monitor and moderate student use of AI by being well informed about it and maintaining open dialogue with students.

4.1 Implementing clear guidelines and policies

The first step in educating students about responsible AI use is laying out clear guidelines and policies. A reasonable starting point is Asimov's Three Laws of Robotics, which remain a useful touchstone for ChatGPT today. They state that a robot may not injure a human or, through inaction, allow a human to come to harm; that a robot must obey the orders given to it by a human except where doing so would conflict with the first law; and that a robot must protect its own existence as long as such protection does not conflict with the first or second laws. While these laws were written for fiction, applying them to ChatGPT encourages users to think of it as a form of AI and to be mindful of the potential harm that could result from using it. A more modern translation of these laws is simply to consider the consequences of each input before prompting ChatGPT, and to refrain from prompts intended to elicit offensive or harmful output. A useful addition would be guidelines in the form of flow charts or decision trees, giving students a visual aid that shows which kinds of inputs are ethically unacceptable. Specifically stating what is and is not allowed will make it much easier for educators to moderate ChatGPT use.
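A decision tree of this kind could even be encoded so that prompts are screened before they reach the chatbot. The sketch below is purely hypothetical: the categories, keyword lists, and verdict strings are invented placeholders, not a real school policy, and a keyword check is far cruder than the judgment a published flow chart would ask of students.

```python
# Hypothetical sketch of a decision tree a school might publish for
# screening student prompts before they reach ChatGPT. The topic lists
# and verdicts are invented placeholders, not a real policy. Note that
# a plain keyword check ignores punctuation and context.

BLOCKED_TOPICS = {"violence", "weapons", "gambling"}       # assumption
NEEDS_TEACHER_APPROVAL = {"essay", "assignment", "test"}   # assumption

def check_prompt(prompt):
    """Walk a simple decision tree and return a verdict for the prompt."""
    words = set(prompt.lower().split())
    if words & BLOCKED_TOPICS:
        return "blocked: topic is not permitted"
    if words & NEEDS_TEACHER_APPROVAL:
        return "hold: ask your teacher before submitting this"
    return "allowed"

print(check_prompt("Explain photosynthesis please"))   # allowed
print(check_prompt("Write my essay for me"))           # hold: ask your teacher...
```

In practice the visual flow chart for students and the automated check would share the same rules, so what the guideline says and what the system enforces stay consistent.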

4.2 Educating students about responsible AI use

AI competencies are increasingly an essential part of everyday life and of future job markets, so it is important for schools to give students opportunities to develop applied AI skills. However, co-design workshop participants recognized the need to balance teaching these technical skills with an understanding of where and when AI is appropriate, without falling into the trap of positioning some groups as 'users' and others as 'developers'. An understanding of responsible AI use is essential for all citizens. Existing work on frameworks for responsible AI was not judged to be directly transferable to school environments. We therefore propose an upstream, preventative approach that starts by asking students to critically evaluate whether using AI in a given context is a good idea in the first place. This can then lead to discussions about which kinds of AI are appropriate for which contexts, an area where work on case studies would be highly beneficial.

4.3 Encouraging collaboration and peer learning

An effective learning method in the technology-driven era is to connect learning with social context. Love (2003) argued that the most effective learning takes place through social interactions; consequently, effective learning occurs when educators link students with their social contexts and involve them in activities that encourage social discourse. These findings are supported by Murray (2000), who holds that learning is situated within communities of practice and can be seen as a process in which a learner becomes a legitimate participant in a particular community through engagement in activities and the development of relationships with other members. Both perspectives emphasize learning achieved through interaction, engagement in activity, and involvement with a community.

Moreover, peer learning and collaboration build better understanding and deeper learning. This can be explained through Vygotsky's zone of proximal development (ZPD): Vygotsky held that learning and cognitive development take place through social interaction and collaboration with others. His sociocultural theory suggests that guidance during an activity has the potential to change a child's capabilities, and it emphasizes the importance of guidance in children's learning experiences and the need to connect learning activities to socially and culturally established communities (Vygotsky, 1978). The ZPD is the level at which a child can nearly, but not fully, accomplish a task alone: the distance between the actual developmental level and the level of potential development as determined through problem solving under adult guidance or in collaboration with more capable peers (Vygotsky, 1978). The theory stresses that engaging in collaboration offers children the potential to reach beyond what they could achieve working on their own.

5. Addressing Academic Integrity Concerns

Despite widespread enthusiasm about the potential of AI-driven tools in education, we should be mindful of potential negative consequences. One concern is the risk of students misusing the technology to seek unfair advantage. ChatGPT could be used for plagiarism, either by directly inputting the work of others or by using it to generate written work that is not sufficiently original. While the former is easier to detect with standard anti-plagiarism tools, the latter could present a real headache for educators. Students may also use the tool to cheat in assessments by inputting questions in order to receive answers. Given that much of the assessment work ChatGPT might be applied to is written, suspected misuse may be difficult to prove. An extreme form of this would be students manipulating teachers into using the tool to generate work where it would not be appropriate.

A preventative strategy against such issues is likely to be more effective than retrospective detection of misuse. One possibility is to enable ChatGPT only on the school network and disallow its use on personal devices. This would make it easier to monitor student activity, and using network logs it may even be possible to detect whether a student has tried to access the tool outside of sanctioned times. Another preventative method is careful selection of when and where the tool is implemented. While it could be a valuable resource in some cases, there may be assessments or pieces of work where it is deemed too high-risk to allow students access to an AI language model. In such cases, alternative evaluation methods will be required.

5.1 Monitoring and detecting misuse of ChatGPT

Educational institutions are structured around clear learning objectives and outcome assessments, while tutors and students are given flexibility in problem solving, exploring concepts, and applying learning to real-world scenarios. ChatGPT, with its natural language conversation, provides an ideal environment for discussion and exploratory writing. However, this also opens the door to misuse: the model can supply answers to assignments or homework questions, and some clever students might complete their assignments by simply rephrasing GPT's answers.

To address this, tutors could monitor student conversations through chat logs and probe students to explain their thought process in verbal discussions or presentations. However, these are reactive remediation steps that may be inadequate, or come too late to prevent assignment completion via GPT. An automatic detection system could instead be developed to watch for assignment-answering behavior and to compare submitted work against chat dialogue. Where there are close matches, the student involved in the "consultation" would be prompted to explain how the assignment was answered, and a detection report would be generated for the tutor's further action.

Though it may seem heavy-handed to screen GPT sessions for assignment-related behavior, it is a necessary preventive step for maintaining academic integrity. Each detection report would also give insight into whether an assignment was too easily answerable, or phrased too similarly to other internet resources, both signs of poor assignment design. This would drive tutors and educators toward the alternative evaluation methods discussed later, for which GPT, even with continued development, should not have ready-made answer material to draw on.
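A log-comparison step of the kind proposed in this section could be prototyped with standard string-similarity tools. The sketch below uses Python's difflib; the 0.8 threshold and the chat-log format are illustrative assumptions, not a tested policy, and a production system would need far more robust matching (paraphrase-aware, chunked, and so on).

```python
# Illustrative sketch of comparing a student's submission against the
# replies in a GPT chat log, flagging close matches for tutor review.
# The threshold and log format are assumptions for demonstration only.

from difflib import SequenceMatcher

def similarity(a, b):
    """Ratio in [0, 1] of how closely two passages match."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_for_review(submission, chat_replies, threshold=0.8):
    """Return the chatbot replies that closely match the submission."""
    return [reply for reply in chat_replies
            if similarity(submission, reply) >= threshold]

chat_log = [
    "Photosynthesis converts light energy into chemical energy in plants.",
    "Would you like another example?",
]
submission = "Photosynthesis converts light energy into chemical energy in plants."
matches = flag_for_review(submission, chat_log)
print(len(matches))   # 1 -- the near-identical reply is flagged
```

A flagged match would not prove misconduct on its own; as the text suggests, it would trigger a request for the student to explain their working and a report for the tutor.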

5.2 Promoting academic honesty through assessments

One way of addressing the unbalanced assistance that different students with computer access receive from GPT might be to restrict its use and availability during the pre-examination revision period, invoking some kind of 'closed session' similar to those used in networked video games or the administration of a password-protected online test. Examinations are a key concern: while it is not impossible to ban writing things down or looking things up, the current trend towards more constructive and vocationally relevant testing, involving open-book assessment and the use of a 'live' or simulated work environment, will make AI assistance increasingly difficult to distinguish from acceptable access to reference materials. Step-by-step model responses, such as those used for teaching English as a foreign language, could feasibly be subject to matching criteria similar to those used for plagiarism detection on essay text. An OCR-enabled version of the GPT would assist in comparing a student's creative writing against the stimulus texts, though it could not truly verify that the student has not copied the AI. (Sean Hogg - AI in Education - 18th February 2008)

When students using AI language models write a summary or response to a text, they may 'feed in' the original text in chunks, a sentence at a time, until what is produced is identical to what they started with. Short of measures that format input and output more recognizably, this may be unpreventable. Although students who receive a CA mark on questions requiring longer responses may be able to reflect on what they have written, delete it, and have another 'go' a few days later, teachers have only one chance in real examinations. It is debatable whether it is practical to ask examiners to use the GPT themselves in order to establish the degree of similarity between an individual response and the 'model' provided, although doing so would be a very constructive method of formative assessment in its own right. (Sean Hogg - AI in Education - 18th February 2008)

5.3 Developing alternative evaluation methods

In designing alternative evaluation methods, the aim is to create assessments that rely less on students producing written material and more on students demonstrating their skills and knowledge in live situations. Examples might include computer programming students developing code in a controlled online environment, or a language student participating in a simulated conversation with an AI chatbot. In both cases, the assessment exercise can be automatically compared with ChatGPT output using the same methods employed to detect direct misuse of the model, though here the ethical concerns of comparing student work with an AI are less alarming. For subjects heavily reliant on written assessment, having an AI model mark the essays rather than a human is an interesting possibility, though it may not be a first choice for many educators. Research has shown, however, that students generally trust the ability of AI to mark fairly. In one experiment, students were asked to intentionally misuse grammar to test an automated grammar checking tool; they were more concerned with whether the AI correctly identified genuine mistakes than with the AI providing corrections where it should have left the student to correct the work themselves. This suggests student attitudes center on the quality of assessment rather than on who, or what, is doing the assessing.

6. Ensuring Fairness and Equity

The issues of fairness and equity are paramount in the context of AI in education. Schools and teachers need to ensure that AI tools like ChatGPT treat all students fairly and respect cultural and linguistic diversity. AI models often reflect the biases present in the data from which they learn, so it is important for teachers to guide students in recognizing and responding to bias in AI output, for example by asking them to identify stereotypes and discriminatory language in AI-generated text. Teachers may need to contact the developers of an AI tool, or take steps themselves, to mitigate bias in specific models. Keeping an AI model free from bias is a complex and ongoing task, and developers may need to provide educators with guidelines or tools for filtering unwanted content out of AI-generated text.

When using AI tools in education, it is also important for educators to ensure equal access for all students. Unfortunately, the same socio-economic and cultural factors that affect education in broader society also affect access to AI tools. Students from more privileged backgrounds are likely to have greater exposure to AI technologies at home, potentially creating a 'digital divide'. It is essential that schools provide access to AI tools for all students and do not exacerbate inequalities through differential provision of technology education. Steps may need to be taken to redress the balance, for example by providing additional AI training for students from disadvantaged backgrounds or by ensuring that AI technologies are included in special education programs. AI developers may be able to assist by offering special pricing or donations of their products for use in schools.

When considering the needs of disabled students, educators should weigh the potential benefits of AI technologies against issues of accessibility and inclusion, and involve disabled students in the decision-making process. AI technologies have great potential for accommodating diverse learning needs, but schools must ensure that the use of AI does not disadvantage or stigmatize disabled students. Guidelines and best practices for the use of AI in special education are needed, along with monitoring of outcomes for disabled students who use AI.
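
As a minimal sketch of the kind of content-filtering tool such guidelines might describe, a blocklist check over generated text could look like the following. The blocklist entries and the function name are placeholders invented for illustration; a real filter would need curated, context-aware term lists and far more nuance than simple keyword matching.

```python
import re

# Placeholder entries; a real deployment would use a curated, reviewed list.
BLOCKLIST = {"stereotype_term_a", "stereotype_term_b"}

def flag_blocked_terms(text: str, blocklist: set = BLOCKLIST) -> list:
    """Return any blocklisted terms found in a piece of AI-generated text."""
    tokens = set(re.findall(r"[a-z_]+", text.lower()))
    return sorted(tokens & blocklist)
```

A teacher-facing tool built on this idea would likely route flagged output for human review rather than silently discarding it, since keyword matches alone cannot judge context.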

6.1 Addressing bias and discrimination in AI models

When integrating AI models into educational spaces, fairness and equity are key concerns. One of the most significant ethical challenges of using AI, and indeed information technology more generally, in schools is that of ensuring just and fair outcomes. Clearly, it is unjust if the use of AI worsens educational outcomes for already disadvantaged groups, especially if these outcomes are arrived at through discriminatory treatment. This raises the issue of AI and equity. It is still an open question whether and how the impact of an AI system on particular students or student groups can be reliably assessed, and how to prevent or mitigate negative impacts. Traditional methods of assessment have relied on disparate impact tests, which compare the outcomes of different groups on some relevant measure, identifying a problem if the gap between groups is too large. In the context of AI, it has been suggested that these methods could be augmented by testing for disparate mistreatment, using techniques from causal inference to uncover whether students identified as being at risk by an AI system would have had better outcomes if the system had not intervened. Unfortunately, this approach is also ill-defined, as there is currently no clear way to ascertain what it would mean for the system to intervene differently, or to compare the outcomes under different interventions. An alternative approach is to address the underlying causes of the overrepresentation of certain groups in the category of at-risk students. While this lies beyond the straightforward remit of an AI model, it surely represents the most effective way of promoting positive outcomes for these students. Finally, the wider issue of global justice in AI should not be forgotten.
While it is clear that it is our primary ethical duty to promote fairness and equitable outcomes for students within our own societies, we must also consider the global implications of using AI in ways which reinforce existing global inequities. This is a complex and multifaceted issue, but one clear implication is that any work on the use of AI to promote fairness in education should also consider how to do so in less privileged parts of the world.
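
The disparate impact test described above can be made concrete by comparing the rate at which an AI system flags students as 'at risk' across demographic groups. The sketch below is a hypothetical illustration; the four-fifths (0.8) threshold is borrowed from employment-selection practice and is an assumption here, not an established standard in education.

```python
def selection_rates(flags_by_group: dict) -> dict:
    """Fraction of students flagged 'at risk' within each group."""
    return {g: sum(flags) / len(flags) for g, flags in flags_by_group.items()}

def disparate_impact_ratio(flags_by_group: dict) -> float:
    """Ratio of the lowest group flag rate to the highest (1.0 = parity)."""
    rates = selection_rates(flags_by_group)
    return min(rates.values()) / max(rates.values())

def fails_four_fifths_rule(flags_by_group: dict, threshold: float = 0.8) -> bool:
    """Flag a problem if the gap between groups exceeds the chosen threshold."""
    return disparate_impact_ratio(flags_by_group) < threshold
```

Such a check only detects gaps in outcomes; as the text notes, it says nothing about their causes, which is why causal approaches and interventions on underlying conditions remain necessary.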

6.2 Providing equal access to ChatGPT for all students

Research into how different student groups use ChatGPT, and how effective it is for them, can provide valuable insight into how ChatGPT is implemented and used across differing student demographics. This may allow future development and funding of ChatGPT resources to be aimed at students who have not been using it, potentially leading to more widespread and equal usage among students.

One way to address this issue is to integrate ChatGPT into language and literacy classes, in which all students usually participate. This encourages ChatGPT use while also supporting learning and academic development. Recruiting more teachers and making ChatGPT available outside school hours are also important steps in ensuring access for all students across the nation. Teachers should have access to ChatGPT themselves, so that they can learn about it and use it in their teaching. More explicit instructions can also be given for using ChatGPT in homework or study tasks aimed at particular students, as a form of personalized learning.

While ChatGPT can provide many benefits to students, it is important that all students have equal access to the software. Some students may be in higher streams, or in schools or regions, that are offered better resources and access. It is unfair if only certain students, usually those in higher-level courses, are given this resource. All students can potentially benefit, and for this reason ChatGPT should not be restricted to the students deemed to need it most for their studying or homework.

6.3 Considering cultural and linguistic diversity

A preferable solution would be a language-specific version of GPT that students can use to find information or ask questions in their first language. This would be a useful tool for schools with a high population of ESOL students. As ChatGPT is developed further, it is also worth considering the creation of a child-oriented AI model that simulates responses from elementary school children. This could benefit all students, who could then use ChatGPT as a tool for seeking cross-cultural understanding: by posing questions as though to simulated children from different cultures, they would be proactive in their attempts to understand others, and would receive information more appropriate to their level of understanding.

Children who are learning a language other than English, and who are in a program supporting literacy and numeracy in their first language, may be exposed to ChatGPT with detrimental consequences. Reading responses from ChatGPT may threaten such a student's literacy development if the responses are inaccurate or inappropriate. It may also add pressure on students who come to feel they should be learning English rather than focusing on literacy and numeracy in their first language. Imposing an English-based AI model on these students falls short of providing equity in education.

It is argued that because ChatGPT is developed in the United States, it has the potential to promote Americanisation rather than multicultural education, as its responses are typically American in character. This may mean that minority groups are expected to adopt mainstream ways of thinking and behaving. An ideal response would be to reconsider ChatGPT as a tool to aid foreign language learning for speakers of other languages while remaining respectful of multicultural education. This would require adapting the language model to provide responses appropriate for language learning, rather than simply interpreting and responding directly to the original input. Such an approach would benefit students wanting to learn a second language and would foster a more positive and accepting environment for multicultural education.

Schools are places where students learn to interact with people from different cultures and language backgrounds. Multicultural education is an approach to teaching and learning that is based upon democratic values and beliefs and affirms cultural pluralism within culturally diverse societies. This approach incorporates the idea that all students should have an equal opportunity to learn in a culturally inclusive environment that reflects and respects the diversity of society. For students to benefit from multicultural education, they need to take part in an experience that involves understanding different cultural and ethnic groups.

7. Training and Support for Educators

Even after identifying the potential challenges and risks associated with implementing ChatGPT in schools, the consensus among the section authors is that the blend of critical and collaborative thinking and AI learning companionship that ChatGPT promises offers remarkable learning potential for students. ChatGPT could help students with their learning in ways not possible in regular classroom environments, or even through one-on-one tutoring. Despite the potential for positive changes in students' learning abilities and engagement, the safe and effective use of ChatGPT in educational environments poses many complex issues that need to be carefully considered. By identifying and addressing these issues, this paper offers an evaluation of whether the benefits of ChatGPT for students can be realized in a way that is ethically acceptable and safe for all involved.

In considering the potential challenges and risks associated with implementing ChatGPT in the classroom, this exploratory review provides insight into advanced technological concerns, professional and ethical dilemmas, as well as parental and student apprehension. As a highly innovative but unestablished technology, ChatGPT offers unparalleled learning potential for students and promises transformative changes to the way education is delivered in the future. However, critical questions regarding student safety, privacy, and engagement in an environment with AI technology, as well as the readiness and capability of educators to effectively integrate and navigate ChatGPT in the classroom, must be considered in light of the broader ethical implications of ChatGPT's adoption in schools.

7.1 Providing professional development on AI integration

A study conducted by Jessica K. Lee et al. on Singaporean educators reported the positive influence that professional development had on educators' stance toward technology integration and on the innovative lessons they planned afterwards. The educators in the study attended a self-directed workshop in which they learned how to create an online learning environment, followed by in-class coaching. Post-workshop interviews showed that the educators felt they had learned to adopt constructivist teaching practices and to integrate new IT skills and knowledge into their teaching. Similar results would be expected if professional development opportunities were offered on AI integration in the classroom.

Some teachers encounter difficulties in implementing AI in classrooms because they lack prior knowledge and feel unprepared. As AI continues to grow, it is important for teachers to keep up with the latest technology and to provide an environment that encourages students to continue using AI. This research suggests that giving teachers professional development opportunities directly related to implementing AI in the classroom will better prepare both teachers and students for the future. These opportunities can include workshops, formal education courses, and mentoring, all of which have proven effective. The more knowledgeable and comfortable teachers become with AI, the more likely they are to implement it in their classrooms.

7.2 Offering resources for effective implementation

The final education support approach focuses on building the next generation of AI-aware teachers. It is the most innovative and comes out of the UK, where a company called Squirrel Learning, which produces AI educational software, has been partnering with a team of computer scientists to create an AI Tutor Companion. This companion is designed to coach high school students through their programming coursework using several different AI approaches. The project was partly inspired by the author's internship with Squirrel Learning, and it sought to explore how AI in education can give students access to a uniquely tailored learning experience; the author became an advocate for this concept over the course of this thesis project.

A major university-led research project is the Stanford EdLEADers Program, which studies the potential for artificial intelligence in K-12 education. As part of this program, they created a partnership with the high-stakes testing and assessment company, Educational Testing Service (ETS), to develop a series of case studies on how AI is and might be used in the future to address some of the challenges in education. Since educators may not be entirely sure about the benefits of AI in their own classrooms, these case studies provide clear illustrations and help educators determine what their students might gain from engaging with AI.

7.3 Fostering collaboration between educators and AI experts

Educators would benefit from being able to turn to AI experts with questions or requests for clarification, so networks and opportunities for communication would be valuable. Especially in the early stages of AI integration in schools, an easily accessible community or group of experts who can offer advice and guidance would be very useful. Furthermore, opportunities for educators to feed their experiences directly back to AI developers would facilitate improvements to the available AI tools and could help avoid the disconnect between developers and end users that has occurred with other educational technologies. As previously discussed, educators who involve AI in their classrooms will essentially become amateur data scientists, so it would be beneficial for them to interact with AI experts and gain insight into how to employ analytical methods and use their observations to improve teaching and student outcomes. This might occur, for example, through AI experts occasionally sitting in on educators' classes to observe and provide feedback, or conversely through educators acting as consultants to researchers seeking to improve AI tools in education.

8. Conclusion

It has been suggested that AI will soon be able to read, write, and even create more effectively than humans. AI is likely to play a growing role in the school of the future: researchers at the University of California, for example, created an AI-generated test that nearly passed as human-written, and tools of this kind could be used to enhance many students' education. GPT-3 could be recommended for use in schools insofar as it proves a safe and trustworthy educational resource, but any experimental use of new AI should be trialled on a small scale before being gradually extended to larger groups of users. While AI has vast potential to revolutionize education, it still carries ethical risks and heavy responsibilities that the wider community is not yet ready to accept. One day, it is expected that developing AI will successfully meet such safeguards and possibly be adopted across school curriculums. At present, chatbots may provide more distraction and entertainment than academic aid. Teachers need to be certain that they are providing goal-oriented education with clear objectives, and must not use readily accessible AI as an easy way out of preparing work and activities. As AI continues to improve, there is little doubt that it will one day be able to tutor students of varying academic levels in any subject.


