Cyber Security Risks of GenAI

7a9X...vgEV
15 Jan 2024


Cyber Risks That May Occur Through Generative AI Systems

 
As the use of ChatGPT and similar technologies becomes widespread, cyber security risks increase. The answers we receive from these platforms are limited by the timeliness and accuracy of the data they were trained on, and they are further constrained by the response rules defined for them. Since ChatGPT and its derivative platforms are trained largely on user-generated content, the margin of error is high. Information obtained from them should be verified before use, and it should never be forgotten that data shared with these platforms can be accessed by third parties. The risks that may arise through Generative AI services are shared below.
 

1. Human Risks

 
While employees generally have good intentions to use Generative AI platforms in ways that are beneficial to their work, responsible, and respectful of workplace expectations, they may not naturally know what is or is not acceptable on those platforms.


New cyber risks arising from the rapid development of technology may not be clearly understood by employees. The widespread adoption of these platforms by non-native English-speaking cybercriminals, who use them to remove the typos and spelling mistakes that once gave attacks away, together with their easy and often free use, further increases the potential risks. Employees may not realise that such new technologies are essentially third-party websites to which normal cyber security rules apply. For example, an employee may unknowingly upload customer-specific financial information to ChatGPT or a similar platform and ask the technology to generate a report. In this case, corporate data flows outside the organisation and is output to external servers. On these servers, the data may be protected with inadequate security measures or in a way that does not comply with the organisation's legal obligations.
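One practical safeguard against this kind of accidental data outflow is to pass every prompt through a redaction step before it leaves the network. The following is a minimal, hypothetical Python sketch; the patterns and the redact_prompt helper are illustrative assumptions, not a substitute for a proper data loss prevention tool.

```python
import re

# Illustrative patterns for data that should never leave the organisation.
# A real deployment would use a dedicated DLP tool with far broader coverage.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace likely-sensitive substrings before the prompt is sent
    to an external Generative AI service."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarise Q3 figures for client jane.doe@example.com, card 4111 1111 1111 1111."
    print(redact_prompt(raw))
    # -> Summarise Q3 figures for client [REDACTED EMAIL], card [REDACTED CREDIT_CARD].
```

Routing prompts through such a gateway also gives the organisation a single place to enforce policy, rather than relying on every employee to remember the rules.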

2. Incorrect Information Sharing

 
Not all generated information is empirically robust or accurate, which can lead to problems with the authenticity and reliability of the information provided. Generative AI hallucinations refer to situations where an AI model generates false or meaningless information while presenting it as factual. These hallucinations arise, especially in natural language processing models such as ChatGPT, because the models are trained to predict the sequence of words that best fits a given input query; they lack the reasoning capabilities to evaluate logical inconsistencies in the output they produce. As a result, Generative AI models can sometimes produce output that appears logical but is actually incorrect or inconsistent.
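One hedge against this failure mode is a self-consistency check: ask the same question several times and measure how often the answers agree. The sketch below assumes a hypothetical ask() wrapper around whichever model API is in use; it illustrates the idea rather than providing a production detector.

```python
from collections import Counter

def ask(question: str) -> str:
    """Hypothetical wrapper around whatever Generative AI API you use.
    Replace with a real client call; shown here only as a placeholder."""
    raise NotImplementedError

def self_consistent_answer(question: str, samples: int = 5) -> tuple[str, float]:
    """Ask the same question several times and measure agreement.
    Low agreement is a useful (though not sufficient) hallucination signal."""
    answers = [ask(question).strip().lower() for _ in range(samples)]
    best, count = Counter(answers).most_common(1)[0]
    return best, count / samples

# Usage: answer, agreement = self_consistent_answer("When was RFC 5246 published?")
# If agreement is low (say, below 0.6), treat the answer as unverified.
```

Low agreement does not prove a hallucination, and high agreement does not rule one out, but it is a cheap first filter before human review.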
 

3. Intellectual Property Rights

 
The use of Generative AI products and applications may pose various risks in terms of intellectual property rights; some examples of these risks are given below.
Content created with Generative AI products and applications may be identical or indistinguishably similar to works protected by copyright, including text, images, music and/or software code belonging to third parties, as well as to content subject to third parties' industrial property rights, such as trademarks, logos, patents and protected inventions. If these third-party works or content are used without obtaining the necessary permissions from the right holders, intellectual property rights may be infringed and your liability may arise.
 

  • Violation of Intellectual Property Rights
  • Plagiarism
  • Unfair Competition
  • Risks Related to Content Belonging to the Company and/or Users

 
During the use of Generative AI products and applications, many situations that are risky in terms of intellectual property rights may arise, and users play a very important role in guarding against these risks.

4. Social Engineering

 
Targeted social engineering attacks can be created by imitating specialists in different disciplines, using creative stories, convincing visuals and realistic interactions. By generating realistic, human-like text or speech, attackers can send convincing fake messages or e-mails and carry out social engineering attacks.


In order to protect against such attacks, it is important to keep security software up to date, use strong authentication methods, and fix security vulnerabilities in Generative AI systems. It is also important that users exercise caution, avoid fake or suspicious content, and follow security best practices.
 

5. Verification

 
The onus is on humans to check that the content generated by these systems is consistent with reality. Users should analyse the content generated by Generative AI systems and use critical thinking skills to detect fake or manipulated content. One practical check is to validate any sources the model cites, as in the sketch below.
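Fabricated references are one of the most common and most checkable failure modes, so a concrete first step is to confirm that every URL a model cites actually resolves. A minimal sketch, assuming the third-party requests library is available:

```python
import re
import requests  # third-party: pip install requests

URL_RE = re.compile(r"https?://[^\s)\]>\"']+")

def check_cited_urls(generated_text: str) -> dict[str, bool]:
    """Extract URLs cited in model output and verify each one resolves.
    Fabricated references are a common failure mode, so a dead link is
    a strong hint that the surrounding claim needs review."""
    results = {}
    for url in URL_RE.findall(generated_text):
        try:
            resp = requests.head(url, timeout=5, allow_redirects=True)
            results[url] = resp.status_code < 400
        except requests.RequestException:
            results[url] = False
    return results

# Usage: any URL mapped to False should be checked manually before
# the content is trusted or republished.
```

A dead link does not automatically mean the claim is false, but it flags the passage for the manual review described above.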
 

6. Ethics Evaluation

 
The use of Generative AI systems may also raise ethical issues, for example the dissemination of fake news or the creation of manipulative content. It is important for people to assess such issues, take a moral position and make appropriate decisions. Ultimately, the role people play against the cyber risks of Generative AI systems is crucial: they should act carefully and responsibly in matters such as verification, training, ethical evaluation and security. In addition, it is important to follow and comply with the legal regulations governing the development and use of these systems in order to prevent these risks.
 

7. Data Privacy

 
Generative AI applications and chatbots may collect and store sensitive information such as personal data, financial information or health data. This information may be accessed or stolen by unauthorised persons, putting the privacy and security of individuals at risk.

Examples of this risk include:

  • Shared written, visual and audio data
  • Access to data by third parties without the owner's permission
  • Collection of sensitive information such as personal, financial or health data if the security of ChatGPT and similar platforms is compromised
  • Confidentiality and privacy: sharing customer or supplier information in violation of contractual agreements and legal requirements for the protection of such information
  • Misrepresentation: distorting someone else's data or fabricating data out of nothing
  • Shared environment: inputs and outputs being accessed by other customers or by the provider for development or sharing purposes
  • Integrations
  • Misuse as an attack method: producing fake documents or fake images that prepare the ground for malicious activities

 
To protect against these risks, strict data protection measures should be taken on platforms where Generative AI systems are used. Data privacy risks can be minimised through measures such as encrypting data, limiting access, and using firewalls and monitoring tools. It is also important to train and use Generative AI systems in accordance with ethical rules and social values.
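As a concrete illustration of the first measure, encrypting data, sensitive records can be encrypted at rest before being stored alongside chatbot logs or outputs. A minimal sketch using the widely used cryptography library (the library choice and the helper names here are assumptions, not a prescribed implementation):

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography

# In practice the key comes from a secrets manager, never from source code.
key = Fernet.generate_key()
fernet = Fernet(key)

def store_securely(record: str) -> bytes:
    """Encrypt a sensitive record before it is written to disk or a database."""
    return fernet.encrypt(record.encode("utf-8"))

def load_securely(token: bytes) -> str:
    """Decrypt a record for an authorised reader."""
    return fernet.decrypt(token).decode("utf-8")

token = store_securely("customer 4711: credit limit 25,000 EUR")
assert load_securely(token) == "customer 4711: credit limit 25,000 EUR"
```

Encryption at rest addresses theft of stored data; limiting access and monitoring (covered under Data Leakage below) address misuse of live systems.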
 

8. Personal Data

 
If personal data is shared with a Generative AI system, there is a risk that this data will be incorporated into the model's training and later surface in responses to queries made by third parties.
 

 
9. Malware

 
Security researchers monitoring dark web forums have found examples of cybercriminals using chatbots to exchange information on how to "optimise" malware code. Reddit forum users also discuss jailbreaks used by cybercriminals. For example, someone asking the chatbot for sample code that encrypts files could use it to accelerate a ransomware project. While the chatbot will not write a full ransomware script, the short-form material it produces can still be dangerous. Cybersecurity researchers have also observed that repeatedly requesting new pieces of code from ChatGPT allows users to create highly evasive polymorphic malware. While this is not a new capability for cyber attackers, ChatGPT's code-generation capability could allow low-skilled cybercriminals to carry out sophisticated attacks. Since these serious cybersecurity concerns emerged, ChatGPT's parent company, OpenAI, has worked to redefine and improve the chatbot's capabilities.
 
 

10. Data Leakage

 
According to OpenAI's privacy policy, data on individual IP addresses, browser types, settings and users' interactions with the chatbot site are collected. Other data may also be collected, such as the type of content users interact with, the features they use, and the actions they take. Data may also be collected about users' browsing activity over time and across different websites.
OpenAI's policies state that the company may share users' personal information with unspecified third parties to fulfil business objectives. To mitigate the data-compromise risk that comes with chatbot use, businesses should put appropriate security measures in place. These may include implementing encryption to protect sensitive data, limiting access to Generative AI systems, and regularly monitoring requests sent to these systems for suspicious activity, as in the sketch below.
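The last of these measures can start as simply as logging every outbound prompt and flagging suspicious ones for review. The sketch below is a hypothetical illustration; the SUSPICIOUS_TERMS list and the monitor_request helper are assumptions to be adapted to each environment.

```python
import logging
from datetime import datetime, timezone

# Terms whose appearance in an outbound prompt warrants review.
# Purely illustrative; a real deployment would tune this list.
SUSPICIOUS_TERMS = ("password", "api key", "confidential", "customer list")

audit_log = logging.getLogger("genai.audit")
logging.basicConfig(level=logging.INFO)

def monitor_request(user: str, prompt: str) -> bool:
    """Record every prompt sent to an external Generative AI service and
    return True if it should be held for manual review."""
    flagged = any(term in prompt.lower() for term in SUSPICIOUS_TERMS)
    audit_log.info(
        "%s user=%s flagged=%s prompt=%r",
        datetime.now(timezone.utc).isoformat(), user, flagged, prompt[:200],
    )
    return flagged

if monitor_request("j.smith", "Summarise our confidential customer list"):
    print("Prompt held for review")
```

An audit trail like this also helps demonstrate compliance with the regulations discussed in the next section.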
 

11. Compliance with Regulations

 
 Failure to comply with legal requirements can result in significant fines and reputational damage to the organisation. For example, if a chatbot is used to process personal health information and is not HIPAA compliant, the organisation could be subject to fines and lawsuits.

Thank you for reading my article. I share links to my other articles below.

Article 1: How Turkey became the dark horse of the 2002 World Cup
Article 2: Generative AI in Cyber Security and the Risks It Brings (Siber Güvenlikte Üretken Yapay Zeka ve Beraberinde Getirdiği Riskler)
Article 3: Will Old Friend ELOIN Surge in 2024? (Eski Dost ELOIN 2024'te Şahlanacak mı?)
Article 4: BOBA FINANCE IDO GUIDE (BOBA FINANCE IDO REHBERİ)

