AI under Scrutiny: Understanding the Risks of ChatGPT, DALL-E, and Co.

Advanced AI tools such as ChatGPT and DALL-E are revolutionizing the way we interact with machines.

ChatGPT is a language model that can understand and generate human-like responses, while DALL-E is an image-generating AI that can create images from textual descriptions.

In this blog post, we explore some of the concerns surrounding the use of AI tools.


Trust and Accuracy

While these AI models can generate coherent paragraphs of text or realistic-looking images, we have to ask how factual their outputs are. It is difficult for a person interacting with these tools to verify the truthfulness and accuracy of the generated content. Users should therefore consult multiple sources rather than rely on artificial intelligence alone.

While Wikipedia – the largest online encyclopedia – places great emphasis on citing credible sources to ensure accuracy and reliability, ChatGPT does not provide sources for its answers. In a recent interview, Wikipedia founder Jimmy Wales reflected on the “tendency [of ChatGPT] to just make stuff up out of thin air, which […] is not OK”.

If you identify false information in a ChatGPT response, you can provide feedback to OpenAI (its creator) by using the “thumbs down” icon 👎 in the chat.

More work is needed to improve the credibility and accuracy of AI models. Without safeguards to ensure the reliability of AI-generated content, these tools could spread misinformation.


Privacy and Data Protection

AI tools are only as good as the data they have been trained on: if the training data is biased, the results will be biased too. Moreover, if a user asks a question that contains private data, such as their name or address, the AI tool may store this information and use it to train its model.

To be fair, ChatGPT warns against this. However, I have seen social media posts of people sharing their entire CVs with ChatGPT in order to get its feedback. This is risky, because that private information could later be made public. The conversational style of ChatGPT may make it feel trustworthy, but it is worth asking whether we should entrust the AI with all our secrets. People either choose to ignore the implications of disclosing their private information or are simply not aware of them.

To further mitigate this risk, privacy protection mechanisms such as anonymization and encryption should be implemented to ensure that users’ private data is not stored, reproduced, or used for purposes other than answering their queries.
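As a complement to such server-side safeguards, users can also anonymize data on their own side before it ever reaches an AI service. The following is a minimal, illustrative sketch of that idea: regex patterns for a couple of obvious identifier types (email addresses and phone numbers) are replaced with placeholders. The patterns and the `anonymize` helper are my own assumptions for illustration, not a complete or production-grade PII detector.

```python
import re

# Hypothetical, illustrative patterns for two common identifier types.
# A real anonymization pipeline would cover far more (names, addresses, IDs).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s/-]{7,}\d"),
}

def anonymize(text: str) -> str:
    """Replace each pattern match with a placeholder such as [EMAIL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Contact me at jane.doe@example.com or +49 170 1234567."
print(anonymize(prompt))
```

Note that plain names (e.g. “Jane”) are not caught by these patterns, which is precisely why purely pattern-based redaction is only a first line of defense.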

The Italian Data Protection Authority has recently banned ChatGPT, citing concerns regarding data breaches and the use of personal data to train the chatbot as the primary reasons. Legislation like the General Data Protection Regulation (GDPR) provides a framework for managing these risks, and it remains to be seen how AI compliance with GDPR will play out in the future.


Copyright and Ownership

There is an ongoing debate about who owns the intellectual property rights to content generated by AI. According to OpenAI’s terms of use (last checked on 16.04.2023), OpenAI assigns users “all its right, title and interest in and to Output” of its AI tools. However, the terms also state: “Due to the nature of machine learning, Output may not be unique across users and the Services may generate the same or similar output for OpenAI or a third party. […] Other users may also ask similar questions and receive the same response. Responses that are requested by and generated for other users are not considered your Content.”

Additionally, copyrighted works are often used to train AI models, which raises the question of whether AI-generated content is derivative or original. Some lawsuits have already raised the issue of AI violating copyright law. The question of who can claim ownership of works produced by AI systems is closely tied to the copyright status of the training data, and is therefore equally unsettled.

Clarifying the fair use of data to develop AI that does not infringe on the copyrights of human creators and determining who has the rights to AI-generated content are complex problems. It remains to be seen if regulations in their current form will be sufficient to solve these issues.

In a Nutshell

ChatGPT, DALL-E, and other AI models point to an exciting future with new creative possibilities, but we must address the risks around trust, privacy, and copyright. More needs to be done to ensure that AI develops in a way that is transparent, accountable, and aligned with human values and priorities. By proactively managing these risks, we can promote the development of ethical AI.

Photo: MS.
