Privacy Implications of Using Consumer AIs for Businesses
August 7, 2023
Cognitive AI tools such as ChatGPT and Google Bard have proven to boost productivity on business tasks. Asking an AI for solutions to cognitive problems is becoming a habit for many knowledge workers, whether their employers are aware of it or not.
We're in a discovery period in which team members mix personal and professional use of these tools. As the tools become more entrenched in every team's workflow, business leaders are starting to scrutinize the privacy and security implications.
In some cases, businesses such as Apple and Samsung have opted to prohibit the use of cognitive AI entirely because of these concerns. Yet understanding the implications and managing them effectively is a better path than avoidance. This article walks you through how the most popular consumer tools handle your data and how Aivia.ai can help.
ChatGPT
Currently, there are two versions of ChatGPT: Free and Plus. In both versions, by default, the data users input can be used to train new models.
Users can manually disable this data collection, but doing so also deactivates several features, such as conversation history, Code Interpreter, and plugins.
Using ChatGPT also creates many other data interfaces. Plugins, for example, open connections to countless third parties, each handling your data in a different place.
Google Bard
Similarly to ChatGPT, Google Bard lets you disable data collection for model training in your Google Account, in a section called Bard Activity.
Going even further than ChatGPT, Google Bard explicitly states that your private information may be read by human reviewers, adding another layer of privacy and security exposure.
Why is it so bad that ChatGPT and Google Bard train on my data?
Many users have become accustomed to trading their data for free products. For most consumers and for low-risk applications, this is normally not an issue.
Also, just because your data can be used for training doesn't necessarily mean it will be. Data sets can be "cleaned" of personally identifiable and confidential information before training (a rough sketch of that idea follows the list below). Still, this is a risk assessment that each business must undertake. Here are a few scenarios that warrant extra consideration:
You deal with your customers' data
If your business handles customer data, it is important that this data is not exposed to third parties without your customers' consent.
You have proprietary data or business secrets
If your business holds trade secrets or information you wouldn't want shared with competitors, such as marketing conversion rates or secret formulas, typing them into a consumer AI puts that confidentiality at risk.
You are in a regulated or privacy-sensitive industry
If you handle health, financial, or other sensitive personal data, you need to assess the risk to your business if this data surfaces in a future AI model.
You need to own the product generated by cognitive AI interactions
If your business produces intellectual property, it is important to be able to monitor, and establish ownership of, the output generated by the collaboration between your team and AI.
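On the "cleaning" point above: redaction can also happen on your side, before anything leaves your network. As a rough illustration of the idea in Python (not Aivia.ai's implementation, and far from production-grade PII detection), a few regular expressions can strip obvious identifiers from a prompt before it is sent to any provider:
```python
import re

# Illustrative only: real PII detection needs far more than a few regexes
# (names, addresses, and free-text identifiers will slip through).
REDACTIONS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholder tags before the text
    is sent to a third-party AI service."""
    for label, pattern in REDACTIONS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach Jane at jane.doe@acme.com or +1 (555) 010-7788."))
# -> "Reach Jane at [EMAIL] or [PHONE]."  (note the name still slips through)
```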
How does Aivia.ai solve this?
When you access AI systems via Aivia.ai, your requests go through each provider's business API rather than the consumer apps.
Aivia.ai has partnerships with OpenAI and Google and pays these providers for each request you make. Their API data-usage terms clarify how your data is handled:
https://help.openai.com/en/articles/5722486-how-your-data-is-used-to-improve-model-performance
"OpenAI does not use data submitted to and generated by our API to train OpenAI models or improve OpenAI’s service offering."
https://cloud.google.com/vertex-ai/docs/generative-ai/data-governance
"Google processes Customer Data to provide the service, and does not use Customer Data to train its models without the express consent of its customers."
Conclusions
Privacy and security in AI must be addressed head-on by businesses. Avoiding the discussion means team members will keep using consumer AI tools on their own, mixing personal and professional data.
The next generation of AI tools brings privacy and data controls built in. The return on investing in such tools will be measured not only by the extra productivity they add but also by the level of privacy and security they offer.