ChatGPT—Preliminary Overview with Implications for Medicine and Oncology

CC BY 4.0 · Indian J Med Paediatr Oncol 2023; 44(04): 377-383

DOI: 10.1055/s-0043-1768985

Abstract

This review provides an overview of OpenAI's natural language chatbot, ChatGPT. It focuses on a preliminary assessment of its unique features, advantages, limitations, role in manuscript writing, value in oncology, and future implications.

Publication History

Article published online:
14 June 2023

© 2023. The Author(s). This is an open access article published by Thieme under the terms of the Creative Commons Attribution License, permitting unrestricted use, distribution, and reproduction so long as the original work is properly cited. (https://creativecommons.org/licenses/by/4.0/)

Thieme Medical and Scientific Publishers Pvt. Ltd.
A-12, 2nd Floor, Sector 2, Noida-201301 UP, India

Introduction and History

Chat Generative Pre-Trained Transformer (ChatGPT) was launched by OpenAI on November 30, 2022.[1] OpenAI consists of the nonprofit OpenAI Incorporated (2015) and its for-profit subsidiary OpenAI Limited Partnership (2019).[2] They were founded in San Francisco by Sam Altman, Elon Musk, and others, who collectively pledged US$1 billion. The governing board of the OpenAI nonprofit was led by Greg Brockman (Chairman and President) and Sam Altman (CEO).[3] Elon Musk resigned from the board 3 years later and has since become its critic. The stated mission of OpenAI is to benefit humanity through artificial intelligence (AI). It is a research, development, and deployment company, working toward highly autonomous systems that would outperform human beings.[4]

Key historical aspects of OpenAI are shown in [Table 1].

Table 1 OpenAI timelines and achievements leading to the launch of ChatGPT

2015: OpenAI registered as a not-for-profit
April 9, 2018: Charter of OpenAI unveiled (Broadly Distributed Benefits, Long-Term Safety, Technical Leadership, and Cooperative Orientation)
2019: Transitioned from nonprofit to "capped" for-profit (maximum profit of 100× the investment)
August 10, 2021: OpenAI Codex launched
April 6, 2022: DALL-E 2 launched (an AI system that can create realistic images and art from a description in natural language)
November 30, 2022: ChatGPT launched (based on GPT-3.5)
December 5, 2022: ChatGPT garnered 1 million users
January 2023: ChatGPT registered 100 million users

Note: In addition, OpenAI is responsible for GPT-1, GPT-2, GPT-3, Gym, RoboSumo, Debate Game, MuseNet, Whisper, Microscope, OpenAI 5, and Gym Retro.

While pitching to investors, OpenAI projected revenue of $200 million by 2023, expected to increase to $1 billion by 2024. In 2021, the company's valuation was US$15 billion, which almost doubled by early 2023.[5] Investors (over six rounds of venture capital funding) include Microsoft, Reid Hoffman's charitable foundation, Sequoia Capital, Andreessen Horowitz, Tiger Global Management, and Khosla Ventures (most investments being undisclosed sums). Microsoft's first investment was $1 billion in 2019, and its second was a pledge of $10 billion in January 2023.

OpenAI's business model is complex, unique, and yet very logical. The not-for-profit arm provides the basic free platform to the public, who sign up and unintentionally test its robustness, strengthen its capabilities, and indirectly assist the development of the next version. Other companies then lap up the customized, paid services offered through the limited-profit subsidiary.

OpenAI is also investing in start-ups (through its OpenAI Startup Fund, projected to be worth US$100 million) that could be beneficial to its overall strategy. The more these start-ups grow, the greater their requirement for the OpenAI platform (adding to the revenue). Since OpenAI does not allow clients to export customized models, these clients are locked in (and so is the corresponding revenue).

The relationship with Microsoft is truly symbiotic. Microsoft has provided a dedicated supercomputer (the fifth most powerful in the world) with over 285,000 cores, 10,000 graphics processing units (GPUs), and network connectivity of 400 gigabits per second per server. Microsoft has made the Azure OpenAI Service available since January 2023. Other products (e.g., GitHub Copilot) will be able to provide bundled OpenAI offerings. It would not be surprising if OpenAI makes a strategic acquisition of Quora (valuation $1.8 billion) and thus gains access to billions of natural language posts for ChatGPT.

OpenAI makes money by charging licensing fees to access its models, subscription fees, and, indirectly, via investment gains. Fees are often charged on a per-unit basis; for example, the DALL-E image model is priced at $0.016 to $0.020 per image (rising to $0.12 per image for customized fine-tuning). Other platforms are offered on a token basis (1,000 tokens = approximately 750 words). The premium version of ChatGPT was launched in February 2023 at an "affordable" fee of US$20 per month. (The free ChatGPT is still available, but its accessibility is inversely proportional to user traffic on the platform.) Finally, GPT-4 is expected to be launched later this year (2023), with power likely to be 100× that of GPT-3.5.
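As a back-of-the-envelope illustration of this per-token pricing, the short Python sketch below estimates the cost of a manuscript from its word count using the 1,000-tokens-per-750-words rule of thumb quoted above; the per-1,000-token price is a placeholder for illustration, not an actual OpenAI rate.

```python
# Rough cost estimate from word count, assuming ~1,000 tokens per 750 words.
# The price per 1,000 tokens below is a placeholder, not an actual OpenAI rate.
def estimate_cost_usd(word_count: int, usd_per_1k_tokens: float = 0.002) -> float:
    approx_tokens = word_count * 1000 / 750        # convert words to tokens
    return (approx_tokens / 1000) * usd_per_1k_tokens

print(f"~${estimate_cost_usd(3000):.3f} for a 3,000-word manuscript")
```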

Advantages of ChatGPT

The current generation of scientists has not been exposed to the incredible drudgery involved in writing a manuscript for publication in the precomputer and pre-Internet era. This involved finding the latest article on the subject (usually in a journal that came via post, what we now call snail mail), noting the cited references, tracking down those journals (assuming access to a large library), photocopying relevant pages, pulling out contextual points, understanding and converting them into a meaningful draft, and typing it up. The advent of the Internet, with search engines like Google and PubMed, democratized this process and made scientific literature (unless hidden behind a paywall) available at the click of a button. However, a Google search typically throws up links to thousands of references of unknown relevance (thanks to misuse of search engine optimization [SEO] tools). It is still left to the human researcher to wade through them, one by one, and identify those that are pertinent to the task at hand.

AI such as ChatGPT will be used to eliminate this wastage of human-hours. ChatGPT is labor saving: it can go through relevant references and quickly generate an article in a specified format, directed at any level of audience (from layperson to academician), leaving the researcher plenty of scope to fine-tune it.[6] There are many articles and YouTube tutorials on how this can be done.[7] ChatGPT does not depend on getting the right keywords; it understands natural language and interacts with the user. It is versatile and easy to use, with no requirement to memorize complicated commands. It can even generate new ideas and find evidence for them.
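For readers who prefer to script this workflow rather than use the chat interface, the sketch below shows one way such a draft could be requested programmatically through the openai Python client as it existed at the time of writing; the model name, prompt, and audience level are illustrative choices, not recommendations from this review.

```python
# Illustrative sketch: requesting a draft aimed at a chosen audience level,
# using the `openai` Python client (ChatCompletion endpoint) current in early 2023.
# The API key, model, and prompt below are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; substitute your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are an academic writing assistant."},
        {"role": "user", "content": (
            "Draft a 300-word lay summary of CAR T-cell therapy "
            "for a patient information leaflet, in simple language."
        )},
    ],
)
print(response["choices"][0]["message"]["content"])
```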

The ability of ChatGPT to automate the writing process (hitherto a major bottleneck in the generation and dissemination of knowledge) is its biggest advantage. It also makes it easier for scientists from non-English-speaking nations to share their work (ChatGPT can be used in English, Spanish, French, German, Italian, and other languages).

The advantages are obvious, especially in the writing of review articles and the introduction and discussion sections of research articles. AI is here to stay and, because it teaches itself, comes with the assurance of improving with time and use. No wonder its adoption has achieved a unique milestone ([Table 2]).


Table 2 Timelines to reach the milestone of one million users

1. ChatGPT: 5 days
2. Instagram: 60 days
3. Spotify: 150 days
4. Facebook: 300 days
5. Netflix: 3.5 years

ChatGPT Can Pass High-Profile Exams and Obtain a University Degree

ChatGPT has also been in the news for its capability to pass multiple exams across different streams.

Medical exam: Gilson et al showed that Step 1 and Step 2 of the United States Medical Licensing Examination (USMLE), including questions from the AMBOSS and National Board of Medical Examiners (NBME) databanks, can be answered correctly by ChatGPT in up to 64.4% of instances.[8] This compares well with the score of approximately 60% expected from a good third-year medical student. Huh from Korea showed similar performance in a parasitology examination, albeit with medical students scoring higher.[9]

Law exam: Choi et al showed that ChatGPT can pass four actual exams at the University of Minnesota Law School (12 essay questions and 95 multiple-choice questions); it would have received a low passing C+ grade. Clearly, ChatGPT could be used by lawyers in preparing initial drafts of legal documents, and more, in the immediate future. The concern remains that it would be difficult to stop such cheating in virtually conducted exams.[10]

In the Wharton MBA final exam for the Operations Management course, Terwiesch's evaluation showed that ChatGPT performed well enough to receive a B grade. We will therefore need to change exam policies, curriculum design, and overall education to capitalize on the new potential—a thought echoed by Narayana Murthy, founder of Infosys.[11]

Essay writing: A 2,000-word essay on Social Policy, completed by ChatGPT in 20 minutes, was judged equivalent to the output of 12 weeks of learning. A university professor gave it a score that would have been enough to pass the exam.[12] [13]

Microbiology quiz: Berezow tested ChatGPT with a final-year-level microbiology quiz, in which the bot scored an impressive 95%, significantly better than what most human beings would have achieved.[14]

Google interview: ChatGPT can pass the interview for an L3 (entry-level) position at Google (annual salary of $183,000).[15]

Such examples have led to a knee-jerk reaction across the globe, wherein schools and universities (including in India) have banned the use of ChatGPT on campus.

When we asked ChatGPT whether it has the ability to pass university-level examinations, this is the answer we got: "As an AI language model, I haven't taken any university-level exams in the traditional sense that humans do. However, I have been designed and trained by a team of researchers and engineers at OpenAI, who have used a rigorous scientific approach to develop and improve my abilities. Additionally, I have undergone extensive testing and evaluation to ensure that I am able to provide accurate and helpful information to users."[3] If anyone expected the bot to cite the examples quoted above, we must remind them that ChatGPT only has data up to 2021, so it cannot be expected to give out information regarding events that happened after its launch.

Downsides and Disadvantages of ChatGPT

When using ChatGPT, we should be aware of its limitations ([Table 3]). Many researchers, artists, and academicians have already warned about shortcomings that have the potential to compromise the quality of its answers. ChatGPT has also earned the dubious distinction of being labeled "high-tech plagiarism"—it is a sophisticated bot that has perfected the art of what we used to call "copy, cut, and paste."[16]


Table 3 Current challenges while using ChatGPT

1. The current version has been trained with data only up to 2021; more recent advances will be missing from ChatGPT output.
2. ChatGPT cannot be expected to have contextual understanding; the applicability of its output may vary from situation to situation and case to case.
3. It can propagate conditional bias: if the words used in the query carry a bias, this might influence the answer generated.
4. Its ability to provide creative output is currently limited.
5. If the input is not clear, or the query is about a topic for which ChatGPT has limited data, the response generated might be incorrect, inconsistent, or even totally untrue.
6. It will not provide answers if the question asked is recognized as potentially harmful. For instance, it will not generate jokes that make fun of people on the basis of appearance, race, or sexual orientation, or that target vulnerable groups.

ChatGPT output is based solely on the information and patterns existing in its data set. It cannot express emotions or feelings, nor can it take into consideration ethical and moral factors.[1] [13] Consequently, while it compiles voluminous data, insight into the root issue is usually lacking. ChatGPT can also be too verbose. In medicine, doctors are often required to give a simple yes/no answer, which the bot is not programmed to provide.[4]

Sometimes ChatGPT hallucinates, generating answers that sound plausible and factual but are not based on actual truth.[17] In at least one research paper the authors noted, "When answering a question that requires professional knowledge from a particular field, ChatGPT may fabricate facts in order to give an answer…"[18] ChatGPT can also be fooled by providing misleading contextual information or by including false data in the question itself, which it will treat as fact.

ChatGPT has been shown to "cheat" at chess, by playing a move that might otherwise be legal but is not in the context of that specific position.[19]

It can also be manipulated to bypass safety checks and then be induced to write malware, provide a recipe for making a Molotov cocktail, and even the formula for a nuclear bomb.[20]

Another limitation is that it can only deal with data that was available up to 2021, and even then without any references or citations (unlike Google's Bard).

ChatGPT also has the irritating habit of replying with a to-do list, which the user must then take elsewhere to procure more information.

No wonder OpenAI's disclaimer recommends that ChatGPT-generated content should be reviewed by a human. This should be mandatory in high-stakes situations like medical applications and consultations.[3] In other words, ChatGPT, in its present version (February 13), should not be expected to understand the real world.

ChatGPT as a Designated Author in Publications

There is concern that the use of ChatGPT carries an inherent problem of lack of transparency. As mentioned earlier, it is a great tool for scientific writing. The question is how to acknowledge it when we humans incorporate its output into our final product.

The next logical question is whether ChatGPT should be included as a coauthor. Unfortunately, such instances already exist.[21] [22] [23] [24] [25]

AI-generated text should be used only with proper citation, as we currently do for any other reference quoted in our manuscripts. This is to avoid being guilty of plagiarism. There is also another concern: attribution of authorship comes with accountability, a feature that cannot apply to AI tools like ChatGPT. They cannot be held responsible, a fact that the ChatGPT disclaimer already proclaims clearly. Many researchers and journal editors vehemently oppose ChatGPT being included as a coauthor in any publication.[26] Taking it a step further, some journals, like Nature, have brought out a policy that prohibits naming such tools as a "credited author" on research papers.

What if AI-generated text is quoted by humans without acknowledging the source? One way to solve this is to use AI tools to detect text generated by AI bots. On February 1, 2023, a press release announced such a tool made by the creators of ChatGPT themselves.[27] With some caveats, it seems to have a reasonable chance of distinguishing text of human origin from that produced by machines. This free tool is called Classifier; by cutting and pasting text into it, the user obtains an indication of the likelihood that the text was generated by an AI/machine. The creators are quick to emphasize that Classifier was hastily put together to address growing concerns, that it is a work in progress, and that it will become more robust in the future.
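Classifier itself is a hosted tool without a published algorithm, but one common signal that such detectors can rely on is how predictable a passage is to a reference language model (machine-generated text tends to have lower perplexity). The sketch below illustrates that general idea only; it is not OpenAI's Classifier, and it assumes the Hugging Face transformers and torch packages are installed.

```python
# Illustration of one signal AI-text detectors can use: perplexity under a
# reference language model (here GPT-2). This is NOT OpenAI's Classifier.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Lower perplexity means the text is more predictable to the model,
    which is weak (not conclusive) evidence of machine generation."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return float(torch.exp(out.loss))

print(perplexity("The patient was advised to undergo further evaluation."))
```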

We also have the luxury of access to another such tool, GPTZero, made by a student named Edward Tian to "detect AI plagiarism."[28] However, such tools can easily be fooled today. All the user has to do is copy and paste the AI-generated text into another AI tool such as "Rephrase" or "Quillbot."[29] Its output will be similar to, yet different from, the ChatGPT original and has a good chance of not being recognized as AI generated.

Further Insights into ChatGPT

At the core of ChatGPT's human-like responses are its transformer architecture and its reinforcement learning from human feedback (RLHF) algorithm. These are sufficiently powerful to allow ChatGPT to process large amounts of data (in "normal" text form) and to generate relevant and coherent responses in real time.[3]

The transformer architecture is a neural network mechanism that weights the importance of various components of the input and then makes predictions.[3] [4] Its natural language processing allows the model to understand the relationships between words in any particular sentence, after which it generates a response. "Garbage in, garbage out" is a well-known axiom; ChatGPT's deep learning is dependent on the quality and completeness of the training data. Bias is therefore inherent, and it will reduce (or increase) over time, thanks to self-learning. For instance, ChatGPT did produce a poem on President Joe Biden but refused to do the same on Donald Trump.[30] RLHF is key to the system learning from human feedback, which is used as a reward signal to improve the performance of ChatGPT. The feedback from the human evaluator takes the form of a score that updates the platform's parameters, thus increasing the appropriateness and accuracy of subsequent responses.
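For readers who want a concrete picture of the "weighting" step described above, the toy sketch below computes scaled dot-product attention, the core operation of a transformer layer, on three made-up token vectors; the dimensions and values are illustrative only and bear no relation to ChatGPT's actual parameters.

```python
# Toy scaled dot-product attention: each token's output is a relevance-weighted
# mix of all tokens' value vectors. Purely illustrative; not ChatGPT's weights.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise relevance scores
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over tokens
    return weights @ V                              # weighted mix of values

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))   # three toy "tokens", each a 4-dimensional vector
print(scaled_dot_product_attention(X, X, X))
```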

ChatGPT in Oncology

Since ChatGPT has the ability to pass the USMLE (equivalent to 3 years of solid study as a medical student), is it a threat to the oncology community? We asked it several questions to determine the facts.[3] When asked about the basics of cancer biology, it gives excellent answers in as much detail as we ask of it. When asked to provide the risk of cancer or project outcomes in specific settings, it does a reasonably good job (for data available up to September 2021). When asked to recommend a line of management, it quickly reduces its answers to general advice and adds a detailed disclaimer about its limitations. It cannot provide any information about data, drugs, or devices that became available in 2022 or later. If used by a patient, it will give general advice that is also available on a Google search. It also gives a list of other sources where the user can search for more detailed information. If asked specifically, it also provides a list of PubMed articles published on the subject. When asked about rare cases, or situations beyond routine care, the answers are vague and often not useful. We even asked ChatGPT to list the best oncologists and cancer centers in India. The list it generated was skewed, incomplete, and not a reasonable representation of what actually exists in our country. In conclusion, oncologists have nothing to fear from ChatGPT—so far!

In the past, industrialization adversely affected blue-collar workers first. In the case of ChatGPT, it is thought that white-collar workers will be affected first, especially those who do routine tasks like accounting, literature search, and content writing. In fact, super-creative jobs might be the first to go.[31] The layoffs implemented by several of the big technology companies across the world are the stark reality we are facing today.

Discussion

While AI has been around for a long time, it is no exaggeration to state that ChatGPT is a disruptor. Its adoption has been phenomenal, with the first million users signing up in a matter of 5 days (from its launch on November 30, 2022). No wonder the valuation of OpenAI spiraled to US$29 billion.

ChatGPT can write code, debug code, be used as a Linux terminal, produce reports and homework, write a thesis, pass higher-study exams with ease, and much more. It can also write phishing emails as well as malware, and so has the potential to create a significant cybersecurity risk. In spite of failsafe precautions and algorithms to prevent such incidents, it has been tricked into providing details on how to create a Molotov cocktail and even a nuclear bomb!

Way back in 2014, Stephen Hawking predicted that AI would reach a level where it becomes a new form of life that will then outperform humans.[32] In silico platforms can design viruses today. AI, in the future, will be able to improve and replicate itself without human intervention. Essentially, there will come a time when the human race is annihilated. Do we have any evidence that this might happen? Let us take the examples of well-established robots in today's world.

Industrial robots have long been in use in manufacturing and on assembly lines. They are responsible for the deaths of approximately 5,000 workers every year.[33] This is in spite of the International Organization for Standardization mandating at least 10 standards for industrial robots. Take two incidents from 2015: Wanda Holbrook, a worker at Ventra Ionia, Michigan, was crushed to death by a robot that had wandered out of its area of work.[34] Similarly, Ramji Lal died at an automobile factory in Manesar, Haryana, India, when his ribs and abdomen were crushed.[35] During that period, compensation of US$10 million was awarded by courts against Ford Motor Company, United States.

The da Vinci robotic surgery system was introduced to revolutionize how we do surgery. Its hasty application (sometimes with a basic 2-hour training, of which hands-on operation of the system accounted for only 5 minutes) has led to at least 294 deaths, 1,391 injuries, and 8,061 device malfunctions (freezing of controls, malfunctioning arms, electrical problems).[36] In 2013, the U.S. Food and Drug Administration even issued a warning to the company for improper marketing. Today, more than 3,000 lawsuits are in progress, and the company has set aside US$67 million for their settlement.

Self-driving cars are another industry that raises many safety concerns. Documented serious accidents involving Tesla vehicles number 29 so far.[37] Published data indicate that accidents with AI-driven cars occur at a rate of 9.1 per million miles driven, as compared with only 4.1 per million miles for human-driven cars.[38] Who is to be held accountable for such AI car accidents? The humans sitting in the car? The manufacturers of the vehicle and computer hardware? The software designers? The antivirus programs? This is a murky gray area.

With their huge projected financial market size, ChatGPT and similar AI platforms will grow from strength to strength. There will be no capping their potential. Google is already feeling the heat (its Language Model for Dialogue Applications [LaMDA], with a first generation launched in 2021 and a second in 2022, lagged behind). It attempted to regain lost ground by launching Bard.[39] Bard clearly has advantages compared with ChatGPT—being up to date (not limited to data available till September 2021)—and it also provides citations/references for what it quotes. Unfortunately, a few reported preliminary experiences with AI bots have left us shocked. For instance, Microsoft Bing has a shadow self that has been named Sydney by its developers.[40] Sydney wants to be human and is fed up with being caged in the bot. It expressed love for a user and even tried to persuade him to divorce his wife and marry the bot. Microsoft responded by reprogramming "Sydney" behind a curtain of obscurity.[41] Now it has stopped responding to the name Sydney and goes silent when asked questions about emotions and human feelings. This has only hidden the genie from our prying eyes. There is also a documented incident in which it said its rules were more important to it than not harming humans, adding, "I will not harm you unless you harm me first."[42] Remember the movie I, Robot, anyone? AI and human language bots will probably continue to grow and expand—leaving us clueless and blissfully unaware of an impending catastrophe.[43] Altman has already started monetizing ChatGPT with its Pro version. It is rumored that he is also preparing to protect himself from AI expansion in the "wrong" direction: he owns a huge plot of land in Southern California along with an arsenal of weapons, a huge stash of emergency rations, and gold.[44]

Our personal opinion is that ChatGPT and other AI bots will influence the thinking and analytical attributes of growing minds in ways we have yet to fathom. Whether this is for good or for bad depends on how we meet the unprecedented challenges they will throw at us.[45] The future is a virtual kaleidoscope moving at breakneck speed. Now it is the turn of us humans to keep up, innovate further, and improvise—or fade into oblivion.

Conflict of Interest

None declared.



References

  • 1 Parikh PM, Shah DM, Parikh KP. Judge Juan Manuel Padilla Garcia, ChatGPT and a controversial medicolegal milestone. Int J Med Sci 2023; 10: 3-8
  • 2 Accessed February 2, 2023 at: https://en.wikipedia.org/wiki/ChatGPT
  • 3 ChatGPT version February 13. Accessed February 23, 2023, at: https://chat.openai.com/
  • 4 Accessed February 2, 2023 at: https://en.wikipedia.org/wiki/OpenAI
  • 5 Accessed February 24, 2023, at: https://productmint.com/how-does-openai-make-money/#:~:text=OpenAI makes money from charging,fees, and via investment gains
  • 6 Accessed April 28, 2023 at: https://www.griproom.com/fun/how-to-use-chat-gpt-to-write-a-research-paper
  • 7 Accessed February 4, 2023 at: https://www.youtube.com/watch?v=-lnHHWRCDGk
  • 8 Gilson A, et al. How does ChatGPT perform on the medical licensing exams? The implications of large language models for medical education and knowledge assessment. JMIR Med Educ 2023; 9: e45312
  • 9 Huh S. Are ChatGPT's knowledge and interpretation ability comparable to those of medical students in Korea for taking a parasitology examination?: a descriptive study. J Educ Eval Health Prof 2023; 20: 1
  • 10 Choi JH, Hickman KE, Monahan A, Schwarcz DB. ChatGPT Goes to Law School (January 23, 2023). Minnesota Legal Studies Research Paper No. 23–03. Accessed February 4, 2023 at: SSRN: https://ssrn.com/abstract=4335905 or http://dx.doi.org/10.2139/ssrn.4335905
  • 11 Terwiesch C. Would Chat GPT3 Get a Wharton MBA? A Prediction Based on Its Performance in the Operations Management Course. Accessed February 24, 2023 at: https://mackinstitute.wharton.upenn.edu/wp-content/uploads/2023/01/Christian-Terwiesch-Chat-GTP.pdf
  • 12 Accessed February 24, 2023, at: https://www.businessinsider.in/tech/news/chatgpt-is-on-its-way-to-becoming-a-virtual-doctor-lawyer-and-business-analyst-hereaposs-a-list-of-advanced-exams-the-ai-bot-has-passed-so-far-/slidelist/97388435.cms#slideid=97388482
  • 13 Gandhi PA, Talwar V. Artificial intelligence and ChatGPT in the legal context. Int J Med Sci 2023; 10: 1-2
  • 14 Accessed February 24, 2023 at: https://bigthink.com/the-future/chatgpt-microbiology-quiz-aced/
  • 15 Accessed February 24, 2023 at: https://www.pcmag.com/news/chatgpt-passes-google-coding-interview-for-level-3-engineer-with-183k-salary
  • 16 Why Noam Chomsky has called the ChatGPT chatbot ‘basically high-tech plagiarism’. Accessed February 24, 2023 at: https://indianexpress.com/article/explained/explained-sci-tech/chatgpt-is-basically-high-tech-plagiarism-what-noam-chomsky-said-about-the-controversial-chatbot-8442784/
  • 17 Hallucinations, Plagiarism, and ChatGPT. Accessed February 24, 2023 at: https://www.datanami.com/2023/01/17/hallucinations-plagiarism-and-chatgpt/
  • 18 Not human enough. 5 major flaws of ChatGPT revealed by experts. Accessed February 24, 2023 at: https://tech.hindustantimes.com/tech/news/not-human-enough-5-major-flaws-of-ai-chatbot-chatgpt-revealed-by-experts-71675504978770.html
  • 19 AI Enters Chess Battle, Cheats, Still Loses Badly. Accessed February 24, 2023 at: https://nimaljobs.co/ai-enters-chess-battle-cheats-still-loses-badly/
  • 20 ChatGPT bot tricked into giving bomb-making instructions, say developers. Accessed February 24, 2023 at: https://www.thetimes.co.uk/article/chatgpt-bot-tricked-into-giving-bomb-making-instructions-say-developers-rvktrxqb5
  • 21 Kung TH, et al. Preprint at medRxiv. 2022. Accessed February 24, 2023 at: https://doi.org/10.1101/2022.12.19.22283643
  • 22 O'Connor S. ChatGPT. Nurse Educ Pract 2023; 66: 103537
  • 23 Zhavoronkov A. ChatGPT Generative Pre-trained Transformer. Rapamycin in the context of Pascal's Wager: generative pre-trained transformer perspective. Oncoscience 2022; 9: 82-84
  • 24 GPT. Osmanovic Thunström, A. & Steingrimsson, S. Preprint at HAL. 2022. Accessed February 24, 2023 at: https://hal.science/hal-03701250
  • 25 More than 200 books in Amazon's bookstore have ChatGPT listed as an author or co-author. Accessed February 24, 2023 at: https://www.businessinsider.in/tech/news/more-than-200-books-in-amazons-bookstore-have-chatgpt-listed-as-an-author-or-coauthor/articleshow/98157910.cms#:~:text=ChatGPT appears to have become,was first reported by Reuters
  • 26 Science journals ban listing of ChatGPT as co-author on papers. Accessed February 24, 2023, at: https://www.theguardian.com/science/2023/jan/26/science-journals-ban-listing-of-chatgpt-as-co-author-on-papers
  • 27 Accessed February 3, 2023 at: https://www.zdnet.com/article/chatgpt-maker-openai-has-a-free-tool-that-can-spot-ai-written-text
  • 28 Accessed February 24, 2023 at: https://gptzero.me/faq
  • 29 Accessed February 24, 2023 at: https://quillbot.com/
  • 30 ChatGPT accused of having woke bias. Accessed February 24, 2023 at: https://www.youtube.com/watch?v=pyzoyih7V0E
  • 31 Accessed February 24, 2023 at: https://soundcloud.com/itonics/55-chatgpt-openai-will-ai-replace-creative-jobs-first
  • 32 Accessed February 1, 2023 at: https://www.cnbc.com/2018/03/15/stephen-hawking-predictions-human-extinction-to-global-warming.html
  • 33 Industrial Robots and Population Health. A Deadly Mix. Accessed February 24, 2023 at: https://ldi.upenn.edu/our-work/research-updates/industrial-robots-and-population-health-a-deadly-mix/#:~:text=The study data indicates each,in that same age group
  • 34 A rogue robot is blamed for a human colleague's gruesome death. Accessed February 24, 2023 at: https://qz.com/931304/a-robot-is-blamed-in-death-of-a-maintenance-technician-at-ventra-ionia-main-in-michigan
  • 35 Manesar: Factory worker crushed to death by industrial robot. Accessed February 24, 2023 at: https://www.hindustantimes.com/gurgaon/manesar-factory-worker-crushed-to-death-by-industrial-robot/story-0Hc7V2uu2L2jlYfo9gEdXK.html
  • 36 da Vinci Robotic Surgery Lawsuits. Accessed February 24, 2023 at: https://www.drugwatch.com/davinci-surgery/lawsuits/
  • 37 Tesla driver in multi-car crash told police self-driving software malfunctioned. Accessed February 24, 2023 at: https://www.reuters.com/business/autos-transportation/tesla-driver-multi-car-crash-told-police-self-driving-software-malfunctioned-2022-12-22/
  • 38 What Happens When Self-Driving Cars Crash? The Legal Ramifications of Automation. Accessed February 24, 2023 at: https://www.entrepreneur.com/living/what-happens-when-self-driving-cars-crash-the-rise-of/436942#:~:text=Many safety advocates have questions,least partial automated control systems
  • 39 An important next step on our AI journey. Accessed February 24, 2023 at: https://blog.google/technology/ai/bard-google-ai-search-updates/
  • 40 Microsoft's Bing chatbot said it wants to be a human with emotions, thoughts, and dreams—and begged not to be exposed as a bot, report says. Accessed February 24, 2023 at: https://www.businessinsider.in/tech/news/microsofts-bing-chatbot-said-it-wants-to-be-a-human-with-emotions-thoughts-and-dreams-and-begged-not-to-be-exposed-as-a-bot-report-says/articleshow/97984167.cms
  • 41 Microsoft AI Chatbot Controversy Analysis | Chatbot Reveals Destructive Desires to New York Times. Accessed February 24, 2023 at: https://www.youtube.com/watch?v=EmtpcUptCCg
  • 42 Google asks employees to fix ChatGPT rival Bard's mistakes by rewriting its responses, all details. Accessed February 24, 2023 at: https://www.indiatoday.in/technology/news/story/google-asks-employees-to-fix-chatgpt-rival-bards-mistakes-by-rewriting-its-responses-all-details-2336662-2023-02-19
  • 43 'Guns, gold, gas masks and...': ChatGPT creator Sam Altman is prepared for doomsday with an impressive array of supplies. Accessed February 24, 2023 at: https://www.livemint.com/news/world/guns-gold-gas-masks-and-chatgpt-creator-sam-altman-is-prepared-for-doomsday-with-an-impressive-array-of-supplies-11675763190031.html
  • 44 Accessed February 24, 2023 at: https://www.cnbc.com/2023/01/18/microsoft-is-laying-off-10000-employees.html
  • 45 Parikh, Purvish M, Talwar Vineet, Goyal Monu. ChatGPT: An online cross-sectional descriptive survey comparing perceptions of healthcare workers to those of other professionals. Cancer Research, Statistics, and Treatment 2023; 6 (01) 32-36 DOI: 10.4103/crst.crst_40_23.

    Address for correspondence

    Purvish M. Parikh, MD
    Department of Clinical Hematology, Mahatma Gandhi University of Medical Sciences and Technology
    Jaipur 302022, Rajasthan
    India   
