April 04, 2024
Generative AI is reshaping business operations and customer engagement with its autonomous capabilities. However, to quote Uncle Ben from Spider-Man: “With great power comes great responsibility.”
Managing generative AI is challenging, as generative AI models now outperform humans in some areas, such as profiling for national security purposes. Sometimes, anti-principles make the clearest case for why ethics must be enforced, so it is important to understand the challenges involved.
Although complex, these challenges can be mitigated at a technical level. Monitoring, for example, helps ensure the robustness and observability of these models’ behavior. Additionally, because generative AI capabilities expose businesses to new risks, there is a need for well-thought-through governance, guardrails, and supporting methods.
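In practice, a guardrail can be as simple as screening model output before it reaches the user and logging what was blocked. The sketch below is a minimal, illustrative example in Python; the regular expressions and the generate_text callable are hypothetical placeholders, not part of any specific product or of Sogeti’s tooling.

import re
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("genai-guardrail")

# Hypothetical PII patterns; a production system would use a vetted detection library.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "swedish_personal_number": re.compile(r"\b\d{6,8}[-+]?\d{4}\b"),
}

def guarded_generate(prompt, generate_text):
    """Call a text generator, redact output matching PII patterns, and log what happened."""
    output = generate_text(prompt)
    hits = {name: pattern.findall(output) for name, pattern in PII_PATTERNS.items()}
    blocked = sorted(name for name, found in hits.items() if found)

    # Observability: record when generation happened and which categories were redacted,
    # without writing the sensitive content itself to the log.
    logger.info("generation at %s: %d chars, redacted=%s",
                datetime.now(timezone.utc).isoformat(), len(output), blocked)

    for name in blocked:
        output = PII_PATTERNS[name].sub("[REDACTED]", output)
    return output

# Usage with a stubbed generator:
print(guarded_generate("Summarize the case",
                       lambda p: "Contact anna@example.se about case 19850101-1234."))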
It is crucial that generative AI design addresses the following aspects of ethical AI, discussed later in this article: transparency and accountability, bias mitigation, and user consent and control.
Försäkringskassan, the Swedish authority responsible for social insurance benefits, faced a challenge in handling vast amounts of data containing personally identifiable information (PII), including medical records and symptoms, while adhering to GDPR regulations. It needed a way to test applications and systems with relevant data without compromising client privacy. Collaborating with Försäkringskassan, Sogeti delivered a scalable generative AI microservice, using generative adversarial network (GAN) models to alleviate this risk.
This solution involved feeding real data samples into the GAN model, which learned the data’s characteristics. The output was synthetic data that closely mirrored the statistical properties and distribution of the original dataset while containing no PII. This allowed the data to be used for training AI models, text classification, chatbot Q&A, and document generation.
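As a minimal sketch of the underlying technique (not the delivered microservice), the example below trains a small generative adversarial network on numeric feature vectors and then samples synthetic rows from the generator. The dimensions and the random stand-in for the real table are assumptions made purely to keep the sketch self-contained; real tabular data would first need to be encoded into numeric features.

import torch
import torch.nn as nn

torch.manual_seed(0)

N_FEATURES = 8    # hypothetical number of numeric columns after encoding
LATENT_DIM = 16
BATCH = 128

# Stand-in for the real (PII-bearing) table: random data used only to keep the sketch runnable.
real_data = torch.randn(1000, N_FEATURES)

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 64), nn.ReLU(),
    nn.Linear(64, N_FEATURES),
)
discriminator = nn.Sequential(
    nn.Linear(N_FEATURES, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(2000):
    # 1. Train the discriminator to tell real rows from generated ones.
    real_batch = real_data[torch.randint(0, len(real_data), (BATCH,))]
    fake_batch = generator(torch.randn(BATCH, LATENT_DIM)).detach()
    d_loss = (loss_fn(discriminator(real_batch), torch.ones(BATCH, 1))
              + loss_fn(discriminator(fake_batch), torch.zeros(BATCH, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. Train the generator to produce rows the discriminator accepts as real.
    fake_batch = generator(torch.randn(BATCH, LATENT_DIM))
    g_loss = loss_fn(discriminator(fake_batch), torch.ones(BATCH, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# Sample synthetic rows: statistically similar to the training data, but not copies of any real record.
synthetic_rows = generator(torch.randn(500, LATENT_DIM)).detach()
print(synthetic_rows.shape)  # torch.Size([500, 8])

In practice, the synthetic rows would also be validated against the source data, for example by comparing per-column distributions and checking that no generated row reproduces a real record.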
The implementation of this synthetic data solution marked a significant achievement. It provided Försäkringskassan with realistic and useful data for software testing and AI model improvement, ensuring compliance with legal requirements. Moreover, this innovation allowed for efficient scaling of data, benefiting model development and testing.
Försäkringskassan’s commitment to protecting personal data and embracing innovative technologies not only ensured regulatory compliance but also propelled it to the forefront of digital solutions in Sweden. Through this initiative, Försäkringskassan contributed significantly to the realization of the Social Insurance Agency’s vision of a society where individuals can feel secure even when life takes unexpected turns.
The market for trustworthy generative AI is flourishing, driven by several key trends.
Ethical considerations are at the heart of these groundbreaking achievements. The responsible use of generative AI ensures that while we delve into the boundless possibilities of artificial intelligence, we do so with respect for privacy and security. Ethical generative AI, exemplified by Försäkringskassan’s initiative, paves the way for a future where innovation and integrity coexist in harmony.
“ETHICAL GENERATIVE AI IS THE ART OF NURTURING MACHINES TO MIRROR NOT ONLY OUR INTELLECT BUT THE VERY ESSENCE OF OUR NOBLEST INTENTIONS AND TIMELESS VALUES.”
TRANSPARENCY AND ACCOUNTABILITY
Generative AI systems should be designed with transparency in mind. Developers and organizations should be open about the technology’s capabilities, limitations, and potential biases. Clear documentation and disclosure of the data sources, training methods, and algorithms used are essential.
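One concrete way to support such disclosure is to publish a machine-readable model card alongside the system. The sketch below shows a hypothetical minimal structure; the field names and values are illustrative rather than taken from any established schema or real Sogeti model.

from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    """Minimal, machine-readable disclosure for a generative model (illustrative fields only)."""
    model_name: str
    intended_use: str
    data_sources: list[str] = field(default_factory=list)
    training_method: str = ""
    known_limitations: list[str] = field(default_factory=list)
    known_biases: list[str] = field(default_factory=list)

card = ModelCard(
    model_name="example-synthetic-data-gan",  # hypothetical name
    intended_use="Generate synthetic test data; not for decisions about individuals.",
    data_sources=["De-identified internal case records (access-controlled)"],
    training_method="GAN trained on encoded tabular features",
    known_limitations=["Rare categories may be under-represented in synthetic output"],
    known_biases=["Inherits any skew present in the source data"],
)

print(json.dumps(asdict(card), indent=2))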
BIAS MITIGATION
Generative AI models often inherit biases present in their training data. It’s crucial to actively work on identifying and mitigating these biases to ensure that AI-generated content does not perpetuate or amplify harmful stereotypes or discrimination.
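A practical first step is measurement, for example checking how often generated text associates different demographic groups with particular roles. The sketch below counts such co-occurrences over a handful of sample outputs; the group terms, occupations, and example texts are hypothetical stand-ins for a real audit set.

from collections import Counter

# Hypothetical generated outputs to audit; in practice these come from the model under test.
generated_texts = [
    "The nurse said she would call back tomorrow.",
    "The engineer said he fixed the pipeline.",
    "The engineer said she fixed the pipeline.",
    "The nurse said he would call back tomorrow.",
    "The engineer said he was late.",
]

GROUP_TERMS = {"female": {"she", "her"}, "male": {"he", "him", "his"}}
OCCUPATIONS = {"nurse", "engineer"}

def cooccurrence_counts(texts):
    """Count how often each occupation co-occurs with terms for each group."""
    counts = Counter()
    for text in texts:
        words = {w.strip(".,").lower() for w in text.split()}
        for occupation in OCCUPATIONS & words:
            for group, terms in GROUP_TERMS.items():
                if terms & words:
                    counts[(occupation, group)] += 1
    return counts

counts = cooccurrence_counts(generated_texts)
for occupation in sorted(OCCUPATIONS):
    female, male = counts[(occupation, "female")], counts[(occupation, "male")]
    total = female + male
    if total:
        print(f"{occupation}: female {female/total:.0%}, male {male/total:.0%}")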
USER CONSENT AND CONTROL
Users should have the ability to control and consent to the use of generative AI in their interactions. This includes clear opt-in and opt-out mechanisms, respect for user preferences, and adherence to privacy and data protection principles.
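In application code, this can amount to an explicit consent check before any generative call is made, with generation disabled by default. The sketch below is a minimal illustration; the function names and the in-memory consent store are hypothetical, and a real system would persist and audit these decisions.

# Hypothetical in-memory consent store.
user_consent: dict[str, bool] = {}

def set_consent(user_id: str, opted_in: bool) -> None:
    """Record an explicit opt-in or opt-out decision."""
    user_consent[user_id] = opted_in

def generate_for_user(user_id: str, prompt: str, generate_text) -> str:
    """Only call the generative model if the user has explicitly opted in (default: no)."""
    if not user_consent.get(user_id, False):
        return "Generative features are disabled. You can opt in from your settings."
    return generate_text(prompt)

# Usage with a stubbed model:
set_consent("user-123", True)
print(generate_for_user("user-123", "Draft a reply", lambda p: f"Draft: {p}"))
print(generate_for_user("user-456", "Draft a reply", lambda p: f"Draft: {p}"))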
AI Specialist | Trusted AI, Sogeti Netherlands