Types of Gender Bias in Answers of Large Language Models: Case of ChatGPT-4
| Field | Value | Language |
| --- | --- | --- |
| dc.contributor.advisor | Oksamytna, Svitlana | en_US |
| dc.contributor.author | Kholodylo, Olesia | en_US |
| dc.date.accessioned | 2024-07-10T07:10:49Z | |
| dc.date.available | 2024-07-10T07:10:49Z | |
| dc.date.issued | 2024 | |
| dc.description.abstract | The study aims to reveal the intangible and tangible ways sexist prejudice is filtered into AI results. | en_US |
| dc.identifier.uri | https://ekmair.ukma.edu.ua/handle/123456789/30387 | |
| dc.language.iso | en_US | |
| dc.status | first published | |
| dc.subject | artificial intelligence | en_US |
| dc.subject | natural language processing algorithms | en_US |
| dc.subject | large language model (LLM) | en_US |
| dc.subject | ChatGPT-4 | en_US |
| dc.subject | gender bias | en_US |
| dc.subject | bachelor's thesis | en_US |
| dc.title | Types of Gender Bias in Answers of Large Language Models: Case of ChatGPT-4 | en_US |
| dc.type | Other | |