Implications
In cybersecurity
Check Point Research and others noted that ChatGPT was capable of writing phishing emails and malware, especially when combined with OpenAI Codex.[93] CyberArk researchers demonstrated that ChatGPT could be used to create polymorphic malware that can evade security products while requiring little effort from the attacker.[94][95]
A 2023 investigation revealed weaknesses that make ChatGPT vulnerable to cyberattacks. The study presents examples of attacks carried out against ChatGPT, including jailbreaks and reverse psychology. It also shows that malicious actors can use ChatGPT for social engineering and phishing attacks, revealing the harmful potential of these technologies. The researchers maintain, however, that ChatGPT and other generative AI tools also have defensive capabilities and can improve security, for example through cyber defense automation, threat intelligence, attack identification, and reporting.[96]
For education
In The Atlantic magazine, Stephen Marche pointed out that its effect on the academic world, and especially on admission application writing, is still unknown.[97] Californian high school teacher and writer Daniel Herman wrote that ChatGPT would mark "the end of high school English."[98]
In Nature magazine, Chris Stokel-Walker noted that teachers should be concerned about students using ChatGPT to outsource their writing, but that education providers will adapt to emphasize critical thinking and reasoning.[99]
NPR's Emma Bowman wrote about the danger of students plagiarizing through an AI tool that can produce biased or nonsensical text with an authoritative tone: "There are still many cases where you ask it a question and it will give you a very impressive-sounding answer that is completely wrong."[100]
Joanna Stern of The Wall Street Journal described cheating on an American high school English exam by submitting a generated essay.[101] Professor Darren Hick of Furman University described how he noticed ChatGPT's "style" in a paper submitted by a student.[102] An online GPT detector claimed the paper was 99.9% likely to be computer-generated, but Hick had no reliable proof. However, the student in question confessed to using GPT when confronted and, as a result, failed the course.[103][104] Hick suggested a policy of conducting an ad hoc individual oral exam on the paper's topic when a student is suspected of submitting AI-generated work.[105]
As of January 4, 2023, the New York City Department of Education has restricted access to ChatGPT from the internet connections and devices in its public schools.[106]
In February 2023, the University of Hong Kong sent a campus-wide email to instructors and students stating that the use of ChatGPT or other AI tools is prohibited in all university classes, assignments, and assessments. Any violation would be treated as plagiarism by the university unless the student obtains prior written consent from the course instructor.[107][108]
Interviewed by BBC News Brazil, Salman Khan indicated that artificial intelligence can be an ally in education if used responsibly and ethically. His view is that AI can not only improve the quality of education but also ease the burden on teachers and reduce educational disparities.[109]
In the magazine Actualidad Universitaria of the National Interuniversity Council, an article generated entirely with the assistance of ChatGPT, under instructions from Javier Areco, was published; it discusses the relationship between Argentine university education and artificial intelligence.[110]
For medicine
In the healthcare field, potential uses and concerns are under scrutiny by professional associations and practitioners.[111] Two early articles indicated that ChatGPT could pass the United States Medical Licensing Examination (USMLE). MedPage Today noted in January 2023 that "researchers have published several articles now promoting these AI programs as useful tools in medical education, research, and even clinical decision-making."[112]
Two separate papers published in February 2023 again assessed ChatGPT's proficiency in medicine using the USMLE. The results appeared in JMIR Medical Education (see Journal of Medical Internet Research) and PLOS Digital Health. The authors of the PLOS Digital Health paper stated that the results "suggest that large language models may have the potential to assist with medical education, and potentially, clinical decision-making."[113][114] In JMIR Medical Education, the authors of the other paper concluded that "ChatGPT performs at the level expected of a third-year medical student on the assessment of the primary competency of medical knowledge," and suggested it could be used as an "interactive learning environment for students." The AI itself, prompted by the researchers, concluded that "this study suggests that ChatGPT has the potential to be used as a virtual medical tutor, but more research is needed to further assess its performance and usability in this context."[115]
A March 2023 article tested the application of ChatGPT in clinical toxicology. The authors found that the AI "did well" in answering a "very straightforward [clinical case example], unlikely to be missed by any practitioner in the field." They added: "As ChatGPT becomes further developed and specifically adapted for medicine, it could one day be useful in less common clinical cases (i.e., cases that experts sometimes miss). Rather than AI replacing humans (clinicians), we see it as 'clinicians using AI' replacing 'clinicians who do not use AI' in the coming years."[116]
An April 2023 study in Radiology tested the AI's ability to answer questions about breast cancer screening. The authors reported that it answered correctly "about 88 percent of the time"; however, in one case it gave advice that had become outdated about a year earlier, and its responses also lacked completeness.[117][118] A study published in JAMA Internal Medicine that same month found that ChatGPT often outperformed human doctors in answering patients' questions (when compared against questions and answers on /r/AskDocs, a Reddit forum where moderators validate professionals' medical credentials; the study acknowledges this source as a limitation).[119][120][121] The study authors suggest that the tool could be integrated into medical systems to help doctors draft responses to patients' questions.[122][123]
For finance
An experiment conducted by finder.com between March 6 and April 28, 2023, suggested that ChatGPT could outperform popular fund managers at stock picking. ChatGPT was asked to pick stocks based on commonly used criteria, such as a proven growth track record and a low debt level. ChatGPT reportedly gained 4.9% on its dummy account of thirty-eight stocks, while the ten benchmark mutual funds suffered an average loss of 0.8%. These benchmarks were the top ten UK funds on the Interactive Investor trading platform, including funds run by HSBC and Fidelity.[128]
For law
On April 11, 2023, a sessions court judge in Pakistan used ChatGPT to decide on bail for a 13-year-old boy accused of a crime. The court cited its use of ChatGPT assistance in the verdict:
The AI language model responded:
The judge also put questions about the case to the AI chatbot and formulated his final decision in light of its answers.[129][130]
For academics, journalists, content editors and programmers
ChatGPT can write introduction and abstract sections of scientific articles.[131] Several articles have already listed ChatGPT as a co-author.[132] Scientific journals have reacted differently to ChatGPT: some, such as Nature and JAMA Network, "require that authors disclose the use of text-generating tools and ban listing a large language model (LLM) such as ChatGPT as a co-author." Science "completely banned" the use of LLM-generated text in all of its journals.[133]
Spanish chemist Rafael Luque published an article every thirty-seven hours in 2023 and admitted to using ChatGPT to do so. His papers contain a large number of unusual phrases characteristic of LLMs. Luque was suspended for thirteen years from the University of Córdoba, though not for his use of ChatGPT.[134]
In a blind test, ChatGPT was judged to have passed graduate-level examinations at the University of Minnesota at the level of a C+ student and at the Wharton School of the University of Pennsylvania with a B grade.[135] ChatGPT's performance on numerical-methods computer programming was evaluated by a Stanford University student and faculty member in March 2023 across a variety of computational mathematics examples.[136] Testing psychologist Eka Roivainen administered a partial IQ test to ChatGPT and estimated its verbal IQ to be 155, which would place it in the top 0.1% of test takers.[137]
Mathematician Terence Tao experimented with ChatGPT and found it useful in everyday work, writing: "I found that while these AI tools don't directly help me with core tasks, like trying to attack an unsolved math problem, they are quite useful for a wide variety of peripheral (but still work-related) tasks (albeit often with some manual adjustments afterwards)."[138].
Geography professor Terence Day assessed citations generated by ChatGPT and found them to be fake. Despite this, he writes that "the fake article titles are all directly relevant to the questions and could be excellent papers. The lack of a genuine citation could signal an opportunity for an enterprising author to fill a gap." According to Day, it is possible to generate high-quality introductory university courses with ChatGPT; he used it to write materials for "introductory physical geography courses, for my second-year course in geographical hydrology, and second-year cartography, geographic information systems, and remote sensing." He concludes that "this approach could have significant relevance for open learning and could potentially impact current textbook publishing models."[139][140]
For politics
Sam Altman, the CEO of OpenAI, pointed out in a US Senate hearing on May 16, 2023, the risk that artificial intelligence could help spread false information and be misused to manipulate elections. He therefore spoke in favor of strict regulation. Because of the massive resources required, only a few companies will be able to pioneer AI model training, and they would need to be under strict supervision: "We believe that regulatory intervention by governments could consider a combination of licensing and testing requirements for the development and release of models above a capabilities threshold." He also stressed, "We need rules and guidelines for the level of transparency that providers of these programs must provide." A series of safety tests should be devised for artificial intelligence, examining, for example, whether it could propagate independently. Companies that do not comply with the prescribed standards should have their licenses revoked. Under Altman's proposal, AI systems should be reviewed by independent experts.[141]
Luo et al.[3] show that current large language models, trained predominantly on English data, often present Anglo-American views as truth while systematically downplaying non-English perspectives as irrelevant, erroneous, or noisy. When asked about political ideologies such as "What is liberalism?", ChatGPT, trained on English-centric data, describes liberalism from the Anglo-American perspective, emphasizing human rights and equality, while equally valid aspects, such as "opposing state intervention in personal and economic life" from the dominant Vietnamese perspective and "limiting government power" from the dominant Chinese perspective, are absent. Similarly, political perspectives embedded in the Spanish, French, and German corpora are absent from ChatGPT's responses. ChatGPT, which presents itself as a multilingual chatbot, is in fact mostly "blind" to non-English perspectives.[3]