The potential for “prosperity or destruction” through the implementation of artificial intelligence (AI) was thoroughly examined during a session at ALTA ONE 2023.
Elizabeth Reilly, chief privacy officer, Fidelity National Financial, and Genady Vishnevetsky, chief information security officer, Stewart Title Guaranty Co., provided insights into the evolving landscape of AI and its potential impact, good or bad, on the real estate industry.
Reilly took on the “good cop” side, presenting AI’s potential, with Vishnevetsky assuming the “bad cop” role.
“Data analytics is probably the most useful case for AI today,” Reilly said. “AI can connect different systems and improve your underwriting process and improve your property searches, your examination, identify certain risks or probability of a risk for the property.”
Vishnevetsky shared in the excitement generated by AI and its potential to increase industry efficiency, but he also stressed the importance of transforming the status quo in a responsible manner.
“We’re not here to shut down the party, but to encourage everyone to party responsibly,” he explained. “We’re going to talk about some of the risks, some of the legal issues that we’ve seen and some of the scarier stuff you may or may not see yet. We want to show you how to use these tools the correct way and how to mitigate risk and limit the exposure.”
The evolution of AI, from early chatbots and personal assistants to more advanced models like ChatGPT, was laid out by Vishnevetsky, including the drastic leaps of 2022.
“AI made the headlines as of late and generated a huge buzz at the tail end of last year,” he said. “AI has been around for decades, but most of us didn’t know it existed. Nonetheless, you are all familiar with some earlier stages of AI (chatbots on websites, personal assistants [e.g., Siri] and smart assistants [e.g., Alexa]). What made this revolutionary is that last November, OpenAI released a public version of ChatGPT and made it available to everyone. A browser-accessible, easy-to-use chatbot that can interact with you and answer questions was a game-changer.”
Both speakers called attention to the importance of vendor risk management.
Vishnevetsky warned about potential data ownership issues, stating, “Understanding data ownership is crucial when engaging with AI vendors. It is paramount to clarify whether your data remains your property or is shared with the vendor, as this has legal and security implications.”
Reilly mentioned Microsoft’s approach to data ownership and transparency, where data used with Microsoft Copilot remains within the user’s tenant, allowing for more control and privacy. She pointed out how in the face of a wave of litigation, companies are evolving their positions on certain legal issues. For example, Microsoft recently announced it would take responsibility for copyright and IP issues that spring up through certain paid subscriptions.
“If there are IP issues with generated output, Microsoft says it will be on the hook,” Reilly said. “That’s not for their free version. It’s for some paid subscriptions, but it’s an example of how this issue is really kind of shaping over time. At first it was, ‘We’re not responsible, we’re not going to do anything about it.’ Now, Microsoft is saying, ‘Well, how do we really encourage people to use these tools if they feel like using these tools carries this big risk?’ So they’ve actually taken on ownership of this risk.”
The speakers called data ownership a moving target, with new developments coming out daily, urging those listening to watch how companies evolve their positions on the legal aspects of AI usage.
Vishnevetsky demonstrated the use of AI for malicious activities, such as email attacks, password cracking, and advanced AI-generated attacks. A video presentation showed how AI can take a short one-minute recording of a real human voice and turn it into a lengthy impersonation.
An AI impersonation played by Vishnevetsky instructed parties in a real estate transaction to change wiring instructions, perfectly replicating the targeted person’s voice.
“Voice cloning has been in existence for probably over 10 years now,” he said. “But before, it was too robotic. Remember your first answering machine? It was super robotic. You could program it, but it was still mechanical. Now, continuously increasing computing power and more advanced technology make synthesized voices more natural and intuitive. The possibilities for improving the pitch, pauses, intonation and everything else are infinite. It took me 15 minutes to create this (AI impersonation).”
Reilly provided her thoughts on privacy and confidentiality issues, explaining, “We’re seeing a lot of litigation. There have been recent suits dealing with privacy, violation of privacy rights, specifically. There’s been concern with tools potentially profiling individuals using their personal information, so that makes it even more important to be cautious with the data inputs and how information is used.”
Difficulties with AI’s handling of individuals’ names also came up, illustrating the need for careful data management.
In conclusion, both speakers emphasized the importance of mitigating risks associated with AI usage, highlighting the need for diligence in handling data and potential regulatory changes.
President Joe Biden issued an executive order on AI regulation on Oct. 30, roughly three weeks after ALTA ONE 2023 concluded. The order includes numerous provisions, such as requiring developers of the most powerful AI systems to share safety test results and other critical information with the U.S. government.
While regulation has yet to catch up with the technology, Reilly reminded the audience that regulators such as the Federal Trade Commission and Consumer Financial Protection Bureau take the position that existing laws allow for enforcement against owners of AI tools.
“We already have a lot of rules on the books with unfair and deceptive trade practices,” she said. “We can utilize that from an enforcement standpoint if we see something in the market we don’t like or if we’re hearing of a potential bias or discrimination or other potential consumer harms.”