Can GPT-4 Chat Pass a Polish Stockbroker Exam?

Keywords

finance
law
stocks
Artificial Intelligence (AI)

How to Cite

Wyłuda, T. (2024) “Can GPT-4 Chat Pass a Polish Stockbroker Exam?”, Scientific Journal of Bielsko-Biala School of Finance and Law. Bielsko-Biała, PL, 28(1), pp. 75–80. doi: 10.19192/wsfip.sj1.2024.10.

Abstract

This research investigates the performance of OpenAI's GPT-4, a sophisticated large language model, in the Polish Stockbroker Exam conducted by the Polish Financial Supervision Authority (KNF). The exam covers a broad range of topics, including legal issues, finance theory, financial mathematics, and pricing, and requires both theoretical knowledge and practical skills pertinent to the financial markets. The study is set against a background of evaluations in which GPT-4 and its predecessors have been tested in numerous academic and professional settings, demonstrating strengths and weaknesses across domains. The study aimed to determine whether GPT-4 can pass the Polish Stockbroker Exam and to analyze its performance across different question types. Results indicated that GPT-4 consistently failed to reach the passing score. However, it performed better when given more time per question, suggesting a trade-off between accuracy and completeness. Analysis by question type revealed higher proficiency in legal and finance theory questions but significant struggles with questions specific to the stockbroker profession. Notably, GPT-4's performance on finance calculation questions improved when it was given more response time.

https://doi.org/10.19192/wsfip.sj1.2024.10
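The abstract describes the scoring and per-category analysis only verbally. The following minimal Python sketch illustrates how such a tally might be computed for a single exam attempt; the scoring constants (+2 for a correct answer, -1 for a wrong one, 0 for a blank, a 160-point pass threshold) and the category labels are illustrative assumptions, not values taken from the paper or from the KNF rules.

    # Minimal sketch: grade one exam attempt and break accuracy down by question
    # category. The scoring scheme below is an assumption for illustration only.
    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class Question:
        category: str   # e.g. "law", "finance theory", "calculation" (assumed labels)
        correct: str    # correct option, e.g. "B"

    POINTS_CORRECT = 2      # assumed points per correct answer
    POINTS_WRONG = -1       # assumed penalty per wrong answer
    POINTS_BLANK = 0        # assumed points for an omitted answer
    PASS_THRESHOLD = 160    # assumed passing score

    def grade(questions: list[Question], answers: list[str | None]) -> dict:
        """Return the total score, a pass/fail flag and per-category accuracy."""
        total = 0
        per_cat = defaultdict(lambda: [0, 0])   # category -> [correct, asked]
        for q, a in zip(questions, answers):
            per_cat[q.category][1] += 1
            if a is None:
                total += POINTS_BLANK
            elif a == q.correct:
                total += POINTS_CORRECT
                per_cat[q.category][0] += 1
            else:
                total += POINTS_WRONG
        accuracy = {cat: hit / asked for cat, (hit, asked) in per_cat.items()}
        return {"score": total, "passed": total >= PASS_THRESHOLD, "accuracy": accuracy}

Given a list of Question objects and the model's recorded answers, grade() returns the overall score, whether the (assumed) threshold was reached, and the per-category accuracy that underlies the kind of question-type comparison reported in the abstract.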

References

Achiam, J., Adler, S., Agarwal, S., Ahmad, L., Akkaya, I., Aleman, F. L., ... & McGrew, B. (2023). GPT-4 technical report. arXiv preprint arXiv:2303.08774.

Balona, C. (2023). ActuaryGPT: Applications of large language models to insurance and actuarial work. Available at SSRN 4543652.

Bashynska, I., Prokopenko, O., & Sala, D. (2023). Managing Human Capital with AI: Synergy of Talent and Technology. Zeszyty Naukowe Wyższej Szkoły Finansów i Prawa w Bielsku-Białej, 27(3), 39-45.

Beerbaum, D. O. (2023). Generative Artificial Intelligence (GAI) with Chat GPT for Accounting–a business case. Available at SSRN 4385651.

Blair-Stanek, A., Carstens, A. M., Goldberg, D. S., Graber, M., Gray, D. C., & Stearns, M. L. (2023). GPT-4's Law School Grades: Con Law C, Crim C-, Law & Econ C, Partnership Tax B, Property B-, Tax B (May 9, 2023).

Callanan, E., Mbakwe, A., Papadimitriou, A., Pei, Y., Sibue, M., Zhu, X., ... & Shah, S. (2023). Can GPT models be financial analysts? An evaluation of ChatGPT and GPT-4 on mock CFA exams. arXiv preprint arXiv:2310.08678.

Eulerich, M., Sanatizadeh, A., Vakilzadeh, H., & Wood, D. A. (2023). Is it All Hype? ChatGPT’s Performance and Disruptive Potential in the Accounting and Auditing Industries. SSRN Electronic Journal.

Fares, O. H., Butt, I., & Lee, S. H. M. (2023). Utilization of artificial intelligence in the banking sector: A systematic literature review. Journal of Financial Services Marketing, 28(4), 835-852.

Farhat, F., Chaudry, B. M., Nadeem, M., Sohail, S. S., & Madsen, D. O. (2023). Evaluating AI models for the National Pre-Medical Exam in India: a head-to-head analysis of ChatGPT-3.5, GPT-4 and Bard. JMIR Preprints.

Geerling, W., Mateer, G. D., Wooten, J., & Damodaran, N. (2023). ChatGPT has aced the test of understanding in college economics: Now what? The American Economist, 05694345231169654.

Gilson, A., Safranek, C. W., Huang, T., Socrates, V., Chi, L., Taylor, R. A., & Chartash, D. (2023). How does ChatGPT perform on the United States medical licensing examination? The implications of large language models for medical education and knowledge assessment. JMIR Medical Education, 9(1), e45312.

Jang, D., Yun, T. R., Lee, C. Y., Kwon, Y. K., & Kim, C. E. (2023). GPT-4 can pass the Korean National Licensing Examination for Korean Medicine Doctors. PLOS Digital Health, 2(12), e0000416.

Jung, L. B., Gudera, J. A., Wiegand, T. L., Allmendinger, S., Dimitriadis, K., & Koerte, I. K. (2023). ChatGPT passes German state examination in medicine with picture questions omitted. Deutsches Ärzteblatt International, 120(21-22), 373.

Karmańska, A. (2022). Artificial Intelligence in audit. Prace Naukowe Uniwersytetu Ekonomicznego we Wrocławiu, 66(4), 87-99.

Kilic, M. E. (2023). AI in Medical Education: A Comparative Analysis of GPT-4 and GPT-3.5 on Turkish Medical Specialization Exam Performance. medRxiv, 2023-07.

Loubier, M. (2023). ChatGPT: A Good Computer Engineering Student? An Experiment on its Ability to Answer Programming Questions from Exams.

Malladi, R. K. (2023). Emerging Frontiers: Exploring the Impact of Generative AI Platforms on University Quantitative Finance Examinations. arXiv preprint arXiv:2308.07979.

Martínez, E. (2023). Re-Evaluating GPT-4's Bar Exam Performance. Available at SSRN 4441311.

Nametala, C. A., Souza, J. V. D., Pimenta, A., & Carrano, E. G. (2023). Use of econometric predictors and artificial neural networks for the construction of stock market investment bots. Computational Economics, 61(2), 743-773.

Pursnani, V., Sermet, Y., Kurt, M., & Demir, I. (2023). Performance of ChatGPT on the US fundamentals of engineering exam: Comprehensive assessment of proficiency and potential implications for professional environmental engineering practice. Computers and Education: Artificial Intelligence, 5, 100183.

Rosoł, M., Gąsior, J. S., Łaba, J., Korzeniewski, K., & Młyńczak, M. (2023). Evaluation of the performance of GPT-3.5 and GPT-4 on the Polish Medical Final Examination. Scientific Reports, 13(1), 20512.

Takagi, S., Watari, T., Erabi, A., & Sakaguchi, K. (2023). Performance of GPT-3.5 and GPT-4 on the Japanese medical licensing examination: comparison study. JMIR Medical Education, 9(1), e48002.

Terwiesch, C. (2023). Would Chat GPT3 get a Wharton MBA? A prediction based on its performance in the operations management course.

Wang, X., Hu, Z., Lu, P., Zhu, Y., Zhang, J., Subramaniam, S., ... & Wang, W. (2023). Scibench: Evaluating college-level scientific problem-solving abilities of large language models. arXiv preprint arXiv:2307.10635.

Yeadon, W., Inyang, O. O., Mizouri, A., Peach, A., & Testrow, C. P. (2023). The death of the short-form physics essay in the coming AI revolution. Physics Education, 58(3), 035027.

Yeadon, W., & Halliday, D. P. (2023). Exploring Durham University physics exams with large language models. arXiv preprint arXiv:2306.15609.

Polish Financial Supervision Authority (n.d.). Examinations for Securities Brokers. Available at: https://www.knf.gov.pl/dla_rynku/egzaminy/Maklerzy_papierow_wartosciowych_egzaminy/testy (Accessed: January 28, 2024).

KNF, Examination Commission for Securities Brokers (2023). Communication No. 4 on the thematic scope of the examination for securities brokers and the skills test. Available at: https://www.knf.gov.pl/knf/pl/komponenty/img/Komunikat_4_2023_87292.pdf (Accessed: January 25, 2024).

Minister of Finance Regulation on examinations for securities brokers and investment advisors and the skills test (2016). Journal of Laws (Dziennik Ustaw) 2016, item 707.


This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

Copyright (c) 2024 Tomasz Wyłuda
