
AI Assistants Need to Know a Lot About You to Work Best. Is That OK?

To get the most out of new generative AI assistants, you'll need to contend with these chatbots learning more about you.

Lisa Eadicicco, Senior Editor
Lisa Eadicicco is a senior editor for CNET covering mobile devices. She has been writing about technology for almost a decade. Prior to joining CNET, Lisa served as a senior tech correspondent at Insider covering Apple and the broader consumer tech industry. She was also previously a tech columnist for Time Magazine and got her start as a staff writer for Laptop Mag and Tom's Guide.

The new wave of virtual helpers can provide more contextual and conversational answers by combining different types of personal data. But are we comfortable with that?

Tippapatt/Getty Images

Digital assistants have existed for years; after all, Siri debuted on the iPhone 4S in 2011. But those early voice-enabled helpers were limited in their functionality, sometimes struggling to produce helpful answers even under ideal circumstances. The new wave of digital agents that began to crop up in late 2022 and 2023, however, can effortlessly do everything from creating recipes to summing up your emails to writing social media captions for your photos.

Virtual helpers took a leap forward this year thanks to the rise of generative AI, or AI that can create content based on prompts after being trained on data. OpenAI dazzled the world with ChatGPT roughly one year ago, and tech giants like Microsoft, Google, Amazon and Meta nimbly wove generative AI into their chatbots, search engines and digital assistants throughout 2023.

But the new cohort of high-tech digital butlers also requires trust in Big Tech, a sizable ask after data breaches, controversies like the 2018 Cambridge Analytica scandal and investigations into privacy practices have shaken our faith in tech companies. The past 10 years have raised big questions from regulators and the general public about how companies use the stream of data we feed them. Reaping the benefits of new AI could mean getting even more personal with the tech services we use every day.

In some ways, chatbots like OpenAI's ChatGPT, Microsoft's Copilot and Google's Bard are just evolutions of how digital services already operate. Companies like Google parent Alphabet, Meta and Amazon already crunch data about our internet browsing habits to provide personalized content, ads and recommendations. New AI tools may not require more personal data, but it's the new ways these tools connect the dots between different types of personal data, like our emails and texts, that raise fresh privacy concerns.

"We can see how the pieces are put together now with these tools," said Matthew Butkovic, technical director for the CERT cybersecurity division at Carnegie Mellon University. "We knew the data was out there, but now we're seeing how it's being used in combination."

The rise of new AI assistants

Microsoft Copilot on the Windows 11 desktop.

Microsoft

Throughout 2023, it became clear that virtual assistants are in the process of getting a major overhaul. While search engines, chatbots and image generators were the first online tools to get an AI glow-up, companies like Microsoft and Google are now infusing the tech into full-fledged digital assistants.

Microsoft Copilot, which the company detailed at its Sept. 21 event, is more sophisticated than Cortana, the PC giant's previous personal assistant, which has since been shut down.

Copilot doesn't just respond to questions and commands like "What will the weather be like in Spain next week?" or "What time is my next meeting?" It pulls information from your apps, the web and your devices to provide more specific and personalized responses.

During the Sept. 21 keynote, Carmen Zlateff, vice president of Windows, showed an example of how Copilot on your Windows PC will be able to answer questions based on information found in your phone's text messages, such as details about an upcoming flight. That example underscores how Copilot does more than just retrieve answers based on the web or data stored in your Microsoft account.

Assistant with Bard

Screenshot/CNET

It didn't take long for Google to showcase how generative AI will play a role in its own helper, the Google Assistant. During an event on Oct. 4, Google unveiled Assistant with Bard, a new version of its virtual sidekick powered by the tech behind its conversational Bard chatbot. 

Sissie Hsiao, vice president and general manager for Google Assistant and Bard, demonstrated this at the event by showing how you'll be able to issue a command like "Catch me up on any important emails I've missed this week." That's all it took for the Google Assistant to conjure up a bulleted list distilling emails, like a child's birthday party invitation and a notification about a collegiate career fair, down to just a couple of sentences.

"While Assistant is great at handling quick tasks like setting timers, giving weather updates and making quick calls, there is so much more that we've always envisioned a deeply capable  personal assistant should be able to do," she said during the presentation. "But the technology to deliver it didn't exist until now."

ChatGPT

ChatGPT displayed on smart phone with OpenAI logo seen on screen in the background.

Jonathan Raa/NurPhoto via Getty Images

Generative AI is influencing almost every aspect of how we interact with the internet -- from retrieving search results to editing images. But Microsoft and Google's announcements represent a radical shift in how these companies are thinking about AI helpers. It goes a step beyond making these virtual assistants better listeners and conversationalists, as Amazon did with the upgraded Alexa it unveiled in September. 

Microsoft and Google may be the biggest proponents of using generative AI to create smarter assistants, but they're not the only ones. OpenAI, which kicked off the generative AI craze with ChatGPT last year, recently announced that users will be able to create custom versions of ChatGPT for specific tasks -- like explaining board game rules and providing tech advice. That potentially opens the opportunity for anyone to create their own specialized digital helper, which OpenAI is calling GPTs. All you need to do is provide instructions, decide what you want your GPT to do, and of course, feed it some data. 

Trusting AI to use our data the right way


A futuristic hallway with logos for Google's Bard, ChatGPT and Bing amid a fierce AI chatbot race.

James Martin/CNET

Generative AI could signal a turning point for virtual assistants, providing them with the contextual awareness and conversational comprehension they've lacked.

But doing so also means giving these digital helpers a bigger window into our personal and professional lives. It requires trust in these AI systems to combine and crunch our emails, files, apps and texts in a way that feels helpful rather than disruptive or unsettling.

Carnegie Mellon's Butkovic provides a hypothetical example of how working with a generative AI assistant could potentially go awry. Let's say you ask an AI helper to compile a report about a specific work-related topic. An AI helper could accidentally weave sensitive client data into its report if that data isn't properly classified.

"We may see potentially a new source of risk in combinations of information we didn't anticipate," he said. "Because we didn't have the tools before, and we haven't put safeguards in place to prevent it."

It's not just about sensitive data. There are moments in our lives we might not want to be reminded of when asking a digital assistant to craft a report or draft an email. How can we trust that it won't surface them?

Jen King, privacy and data fellow at the Stanford Institute for Human-Centered Artificial Intelligence, cites another hypothetical example. If you have lengthy email conversations with family members sitting in your inbox discussing arrangements for a deceased loved one, you probably wouldn't want those communications pulled into certain answers or reports. 


The Google Photos app

Sarah Tew/CNET

There's already a precedent for this happening in social media and photo gallery apps. After receiving feedback, Facebook added more controls in 2019 to make it easier for people managing the account of a deceased loved one. The company uses AI to stop profiles of deceased friends or family members from surfacing in birthday notifications and event invite recommendations, but acknowledged at the time that there was room for improvement.

"We're working to get better and faster at this," the post read.  

Google also added more controls for managing which curated memories appear in its Photos app after Wired highlighted how painful it can be to look back on certain photos stored on our phones, and how few tools existed at the time for managing them.

"The more data I can feed [it] about you, the more I'm going to be able to fill in the blanks," King said. "And if you're looking across like multiple facets of someone's life, that's a really risky strategy, because so much of our lives is not static."

That's in addition to existing challenges generative AI-based apps and chatbots already face, such as the accuracy of the information they deliver and the potential for hackers to trick these systems. Tech companies are aware of these hurdles and are trying to address them. 

Google nudges you to double-check Bard's responses and says the tool may provide incorrect information, for example. It also notes that Bard can only access personal data from Workspace, Google's suite of productivity software that includes Gmail, with the user's permission, and that it doesn't use that content to show ads or improve Bard. Google's blog post about Assistant with Bard also mentions that users will be able to customize their privacy settings.

ChatGPT similarly encourages users to fact-check answers and discloses that responses may not always be accurate. It also warns users not to input sensitive information into the tool.

The frequently asked questions page for Microsoft Copilot notes that responses aren't guaranteed to be right and says Copilot inherits the same security standards that enterprise products like Microsoft 365 are built on. It also says customer data isn't used to train its large language models.

But Irina Raicu, the Internet ethics program director at Santa Clara University's Markkula Center for Applied Ethics, is concerned about new privacy vulnerabilities specific to generative AI that haven't been resolved yet. One such example is prompt injection, an exploit that lets attackers take advantage of large language models by hiding malicious instructions in the content those models process.

Unlocked padlock and fencing on a phone screen, with a computer keyboard in the background

Some privacy experts warn that the rise of generative AI could result in new types of threats.

Angela Lang/CNET

In a blog post from August, the UK's National Cyber Security Centre described an example of what a potential prompt injection attack could look like. A hacker could theoretically hide malicious code in a transaction request sent to a user via a banking app. If the person asks the bank's chatbot about their spending habits for the month, the large language model could end up analyzing that code from the attacker while looking through the person's transactions to answer the question. That, in turn, could trigger money to be sent to the attacker's account.

Raicu is concerned that cybersecurity isn't keeping up with these new threats. She points to the rise of ransomware attacks in recent years as an example of what can happen when cybersecurity solutions don't evolve quickly enough.

"Imagine that, but with a new layer of challenges that we don't understand yet," she said.  

However, there are some reasons to be hopeful that the AI boom won't result in the privacy mishaps that followed the proliferation of new tech platforms in the past. There's already a push for AI regulation in the White House and the EU, for example. Tech companies are generally under more scrutiny when it comes to privacy, security, and their size and influence than they were when platforms like Facebook, Instagram and Google Search emerged.

But in the end, we're still going to have to contend with the potential risks and trade-offs that come with the benefits of new technologies.

"There's going to be no absolutes in this," Butkovic said. "We're going to live in this gray space where you need to make personal decisions about your comfort with these sorts of systems culling through the artifacts of your digital life."

Editors' note: CNET is using an AI engine to help create some stories. For more, see this post.