
ChatGPT for medical applications and urological science

To the editor,

A Generative Pre-Trained Transformer (GPT) is an artificial intelligence (AI) algorithm designed to understand and generate human-like language. ChatGPT is a free and publicly available large language model developed by OpenAI. It uses advanced natural language processing algorithms, is trained on a vast corpus of data, and is not free of limitations and biases. AI is in its infancy, yet its potential will undoubtedly change our practices and human life. As an impactful tool, it can be used for good and for bad, and its responsible and moderated use is critical.

What is ChatGPT

A Generative Pre-Trained Transformer (GPT) is an artificial intelligence (AI) algorithm designed to understand and generate human-like language. ChatGPT is a free and publicly available large language model developed by OpenAI. It uses advanced natural language processing algorithms and is trained on a vast corpus of data (1).

Training data includes a wide range of questions and prompts and a diverse set of texts, such as books, articles, and websites, allowing knowledge in many different domains. As an AI language model, its goal is to assist and provide useful responses to users who interact with it (2).

A PubMed search on March 3, 2023 using the term “chatGPT” displayed 5 publications from 2022 and 72 from the first two months of 2023, and another search on March 9, 2023 retrieved an additional 22 documents. The studies range from literature reviews on the topic to clinical case reports that used ChatGPT as a writing tool.
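Bibliometric counts like the one above can also be reproduced programmatically. The sketch below builds a query URL for NCBI's public E-utilities `esearch` endpoint (the endpoint and parameter names are real; the term and date window are simply those of the search described here). It only constructs the URL and leaves the actual HTTP request to the reader:

```python
from urllib.parse import urlencode

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search_url(term: str, mindate: str, maxdate: str) -> str:
    """Build an E-utilities query URL that counts PubMed records for a
    term within a publication-date window (dates as YYYY/MM/DD)."""
    params = {
        "db": "pubmed",
        "term": term,
        "datetype": "pdat",   # restrict by publication date
        "mindate": mindate,
        "maxdate": maxdate,
        "retmode": "json",
        "rettype": "count",   # return only the record count
    }
    return f"{BASE}?{urlencode(params)}"

# Query corresponding to the search in the text
url = pubmed_search_url("chatGPT", "2023/01/01", "2023/03/03")
```

Fetching that URL returns a JSON document whose `esearchresult.count` field holds the number of matching records.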

ChatGPT for medical applications

ChatGPT has several potential applications in the field of medicine and healthcare. As an AI language model, ChatGPT has been trained on a vast amount of medical text data, including research papers, clinical reports, and electronic health records. This means that it has a deep knowledge base on medical topics and can generate text that may be instructive (3).

One potential application of ChatGPT in medicine is clinical decision support. By inputting patient data and symptoms, ChatGPT can generate recommendations for diagnosis and treatment based on the latest medical research and clinical guidelines. Like any tool, when used as a complement to human expertise it can help improve the accuracy and efficiency of medical diagnosis and treatment, with the potential to lead to better patient outcomes (4).

Another application of ChatGPT in medicine is natural language processing of electronic health records (EHRs). EHRs contain a vast amount of unstructured text data, and extracting meaningful information from these records can be time-consuming and challenging. ChatGPT can help automate this process by analyzing EHRs and identifying key information, such as patient diagnoses, treatments, and outcomes (5).
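As a minimal illustration of the kind of EHR information extraction described above, the sketch below pulls labelled fields out of an unstructured note with hand-written regular expressions. The section names and the sample note are hypothetical; a production system would rely on a trained clinical NLP model rather than rules like these:

```python
import re

# Illustrative patterns mapping a field name to the note heading that
# introduces it; a real system would learn these, not hard-code them.
SECTION_PATTERNS = {
    "diagnosis": r"(?:diagnosis|impression):\s*(.+)",
    "treatment": r"(?:plan|treatment):\s*(.+)",
}

def extract_fields(note: str) -> dict:
    """Pull labelled fields out of an unstructured clinical note."""
    fields = {}
    for name, pattern in SECTION_PATTERNS.items():
        match = re.search(pattern, note, flags=re.IGNORECASE)
        if match:
            fields[name] = match.group(1).strip()
    return fields

note = ("Impression: benign prostatic hyperplasia.\n"
        "Plan: start tamsulosin 0.4 mg daily.")
fields = extract_fields(note)
```

Here `fields` maps "diagnosis" and "treatment" to the text following each heading, giving structured output from free text.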

ChatGPT can also assist with patient education by generating easy-to-understand explanations of medical conditions and treatments. This can be particularly helpful for patients who may have difficulty understanding complex medical terminology or who may feel overwhelmed by the amount of medical information available online (6).

Overall, ChatGPT has several potential applications in medicine and healthcare, and its ability to generate text on a wide range of medical topics makes it a valuable tool for medical professionals, researchers, and patients alike. However, it's important to note that ChatGPT should be used as a complementary tool to human expertise and should not be relied upon as a substitute for professional medical advice or diagnosis.

ChatGPT for medical writing

ChatGPT can be a valuable resource for medical writing in several ways. As an AI language model, ChatGPT has been trained on a vast amount of data, including medical literature, research articles, and clinical reports. This means that it has a vast knowledge base on medical topics and can generate precise text (7).

One way ChatGPT can help with medical writing is by providing assistance with grammar and syntax. Medical writing often involves complex terminology and jargon, and ChatGPT can help ensure that the language used in a medical document is grammatically correct and easy to understand (8).

ChatGPT can also help with writing medical reports, research papers, and other types of medical documents by providing suggestions for structure, formatting, and organization. For example, it can suggest appropriate headings and subheadings, provide examples of effective introductions and conclusions, and help ensure that the document flows logically and coherently (9).

Furthermore, ChatGPT can help with summarizing complex medical information in a way that is easy to understand for a lay audience. This can be particularly helpful for medical writers who are creating patient education materials or other types of health-related content (10).

Overall, ChatGPT's ability to generate grammatically correct text on a wide range of medical topics can be a valuable resource for medical writers looking to improve the quality and effectiveness of their work.

ChatGPT for urological science

ChatGPT can be a useful resource for urological science in several ways. As an AI language model, ChatGPT has been trained on a vast amount of text data, including scientific research papers, medical textbooks, and other authoritative sources on urological science. This means that it has a deep knowledge base on urological science (11).

One way ChatGPT can help with urological science is by providing assistance with writing research papers and clinical reports. Urological science involves complex medical terminology and jargon, and ChatGPT can help ensure that the language used in a research paper or clinical report is scientifically accurate (12).

ChatGPT can also help urological scientists with analyzing and interpreting data. By inputting data from urological studies or clinical trials, ChatGPT can generate text that provides insights into the findings, significance, and implications of the data. This can be particularly helpful for urological scientists who need to communicate their research findings in a clear and concise way (13).

Furthermore, ChatGPT can assist with creating patient education materials on urological topics. Urological conditions can be complex and difficult to understand for patients, and ChatGPT can generate text that explains urological conditions and treatments in a clear and understandable way (14, 15). Overall, ChatGPT can be a valuable resource for urological scientists looking to improve the quality and effectiveness of their work.

ChatGPT and references

ChatGPT is an AI language model that generates text based on patterns and statistical models learned from a large corpus of text data. While it can provide accurate and relevant information, it does not have the ability to add exact references or citations to its writing.

However, as an AI language model, ChatGPT is trained on a vast amount of text data, including scientific research papers, academic journals, and other authoritative sources. This means that the information it provides is typically based on reliable and credible sources.

It's important to note that it is ultimately up to the user to ensure that the information is properly cited and referenced in any written work. If you are using ChatGPT to generate content for a research paper, article, or other type of written work, it's important to carefully review and fact-check the information it provides and to include proper citations and references to any sources used.
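Part of that review can be mechanized. The hypothetical helper below cross-checks numeric in-text citations of the form (1) or (2, 3) against the length of the reference list and flags entries that are never cited; it is a sketch, not a substitute for manually verifying that each cited source actually exists and supports the claim:

```python
import re

def uncited_references(text: str, num_refs: int) -> set:
    """Return reference numbers present in the bibliography but never
    cited in the text, assuming citations written as (1) or (2, 3)."""
    cited = set()
    # Match parenthesized groups of comma-separated integers
    for group in re.findall(r"\((\d+(?:,\s*\d+)*)\)", text):
        cited.update(int(n) for n in group.split(","))
    return set(range(1, num_refs + 1)) - cited
```

Running it over a draft with a three-entry reference list that only cites (1, 3) would flag reference 2 as uncited.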

In summary, ChatGPT is a powerful tool that can provide information on a wide range of topics, but it is the user's responsibility to ensure that the information is properly cited and referenced in any written work.

Ethics in the use of ChatGPT

As an AI language model, ChatGPT has the potential to be a powerful tool for many different applications. However, there are important ethical considerations that must be taken into account in the use of ChatGPT (16).

First and foremost, the data that is used to train ChatGPT is critical. To ensure that ChatGPT is ethical and unbiased, the data that is used to train the model must be carefully selected and scrutinized. Training data that includes biased or discriminatory language can result in a model that perpetuates those biases and reinforces harmful stereotypes (17).

Another important ethical consideration is the potential misuse of ChatGPT. While ChatGPT can be used to provide helpful information and support to users, it can also be used to spread misinformation, hate speech, and other harmful content. It is important for developers and users of ChatGPT to be aware of this potential and take steps to mitigate it, such as implementing content moderation tools and educating users on responsible use.

There are also privacy and security concerns associated with the use of ChatGPT. Conversations with ChatGPT can contain sensitive personal information, and it is important for developers to take steps to protect this data from unauthorized access or use. Overall, ethics in the use of ChatGPT requires careful consideration of the data used to train the model, responsible use of the model to prevent harmful content, and safeguarding user privacy and security (18).

Researchers using AI tools should document this use in the methods or acknowledgements section of the manuscript, and AI tools cannot be credited with authorship, since authorship carries responsibility and accountability for the work (19, 20).

AI is in its infancy, is not free of limitations and biases, and its potential will certainly change our practices and human life. Like any impactful tool, it can be used for good and for bad, and its responsible and moderated use is key.

ChatGPT limitations

Researchers are actively working to improve ChatGPT's capabilities and to address limitations that are common to most language models and AI systems in general.

While it is continually being updated with new data and fine-tuned by researchers and developers to improve its performance, the specific level of updating and the frequency of updates can vary depending on the resources and priorities of the team or organization responsible for maintaining ChatGPT. The current version was trained on a dataset that had a knowledge cutoff date of September 2021.

In addition to biases arising from the sources of the data, the people who wrote it, or the orientation of its programmers, algorithmic bias may create systematic and repeatable errors with “unfair” outcomes that deviate from the algorithm's intended function.

Artificial intelligence hallucination may generate plausible-sounding but incorrect or nonsensical answers that do not seem to be justified by the training data, such as claiming to be human.

Limited access to human commonsense knowledge and an inability to reason, learn, and create like humans may cause it to struggle to understand the full context of a conversation or a piece of text, producing illogical, out-of-context, irrelevant, or inaccurate responses (21).

REFERENCES

  • 1
    GPT-4 is OpenAI's most advanced system, producing safer and more useful responses. OpenAI. [Internet]. Available at: <https://openai.com>. Accessed on March 6, 2023.
  • 2
    Brown TB, Mann B, Ryder N, Subbiah M, Kaplan J, Dhariwal P, et al. Language models are few-shot learners. Adv Neural Inf Process Syst. 2020;33:1877-1901.
  • 3
    Laranjo L, Dunn AG, Tong HL, Kocaballi AB, Chen J, Bashir R, et al. Conversational agents in healthcare: a systematic review. J Am Med Inform Assoc. 2018;25:1248-58.
  • 4
    Xu L, Sanders L, Li K, Chow JCL. Chatbot for Health Care and Oncology Applications Using Artificial Intelligence and Machine Learning: Systematic Review. JMIR Cancer. 2021;7:e27850.
  • 5
    Yang LWY, Ng WY, Lei X, Tan SCY, Wang Z, Yan M, et al. Development and testing of a multi-lingual Natural Language Processing-based deep learning system in 10 languages for COVID-19 pandemic crisis: A multi-center study. Front Public Health. 2023;11:1063466.
  • 6
    Almalki M, Azeez F. Health Chatbots for Fighting COVID-19: a Scoping Review. Acta Inform Med. 2020;28:241-7.
  • 7
    Coiera E, Liu S. Evidence synthesis, digital scribes, and translational challenges for artificial intelligence in health-care. Cell Rep Med. 2022;3:100860.
  • 8
    van Buchem MM, Boosman H, Bauer MP, Kant IMJ, Cammel SA, Steyerberg EW. The digital scribe in clinical practice: a scoping review and research agenda. NPJ Digit Med. 2021;4:57.
  • 9
    Perlis N, Finelli A, Lovas M, Berlin A, Papadakos J, Ghai S, et al. Creating patient-centered radiology reports to empower patients undergoing prostate magnetic resonance imaging. Can Urol Assoc J. 2021;15:108-13.
  • 10
    Hasan MM, Islam MU, Sadeq MJ, Fung WK, Uddin J. Review on the Evaluation and Development of Artificial Intelligence for COVID-19 Containment. Sensors (Basel). 2023;23:527.
  • 11
    Chen J, Remulla D, Nguyen JH, Dua A, Liu Y, Dasgupta P, et al. Current status of artificial intelligence applications in urology and their potential to influence clinical practice. BJU Int. 2019;124:567-77. Erratum in: BJU Int. 2020;126:647.
  • 12
    Eun SJ, Kim J, Kim KH. Applications of artificial intelligence in urological setting: a hopeful path to improved care. J Exerc Rehabil. 2021;17:308-12.
  • 13
    Chu TN, Wong EY, Ma R, Yang CH, Dalieh IS, Hung AJ. Exploring the Use of Artificial Intelligence in the Management of Prostate Cancer. Curr Urol Rep. 2023; Epub ahead of print.
  • 14
    Thenault R, Kaulanjan K, Darde T, Rioux-Leclercq N, Bensalah K, Mermier M, et al. The Application of Artificial Intelligence in Prostate Cancer Management—What Improvements Can Be Expected? A Systematic Review. Applied Sciences. 2020;10:6428.
  • 15
    Gabrielson AT, Odisho AY, Canes D. Harnessing Generative Artificial Intelligence to Improve Efficiency Among Urologists: Welcome ChatGPT. J Urol. 2023; Epub ahead of print.
  • 16
    Morley J, Machado CCV, Burr C, Cowls J, Joshi I, Taddeo M, et al. The ethics of AI in health care: A mapping review. Soc Sci Med. 2020;260:113172.
  • 17
    Jobin A, Ienca M, Vayena E. The global landscape of AI ethics guidelines. Nat Mach Intell. 2019;1:389-99. [Internet]. Available at: <https://www.nature.com/articles/s42256-019-0088-2>.
  • 18
    Liebrenz M, Schleifer R, Buadze A, Bhugra D, Smith A. Generating scholarly content with ChatGPT: ethical challenges for medical publishing. Lancet Digit Health. 2023;5:e105-e106.
  • 19
    Stokel-Walker C. ChatGPT listed as author on research papers: many scientists disapprove. Nature. 2023;613:620-1.
  • 20
    [No Authors]. Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature. 2023;613:612.
  • 21
    Obermeyer Z, Topol EJ. Artificial intelligence, bias, and patients’ perspectives. Lancet. 2021;397:2038.

Publication Dates

  • Publication in this collection
    01 Sept 2023
  • Date of issue
    Sep-Oct 2023

History

  • Received
    09 Apr 2023
  • Accepted
    11 Apr 2023
  • Published
    20 June 2023
Sociedade Brasileira de Urologia, Rua Bambina, 153, 22251-050, Rio de Janeiro, RJ, Brazil. Tel. +55 21 2539-6787, Fax: +55 21 2246-4088.
E-mail: brazjurol@brazjurol.com.br