
April 29, 2024 | 4 Mins Read

4 Critical Factors to Consider While AI Legislation Continues to Develop


Earlier this year, the European Union adopted new rules around how artificial intelligence (AI) can be used by both public and private organizations. While legislation is still developing in the U.S., service organizations that want to leverage AI in their operations should be paying attention to these emerging laws.

The rules adopted by the European Parliament address privacy concerns (such as using images scraped from the Internet to create facial recognition databases), while also requiring certain types of AI systems to reduce risks and ensure human oversight. Those systems include ones used in areas such as vocational training, law enforcement, and border management.

A key element is transparency about the models and data these AI systems are built on, reflecting a concern many people share about AI platforms in general: knowing what data the algorithms are drawing from.

This matters for potential new use cases of AI, because the quality of the data being fed to AI solutions counts. Without getting too deep in the weeds, AI solutions built on large language models like ChatGPT are not really thinking so much as analyzing data and producing a synopsis, an answer, or some other output based on previously existing material.

For general purpose AI, content providers are already pushing back against the use of copyrighted materials – like the contents of the New York Times – that are being ingested by these systems. The types of AI solutions being used or proposed in field service are less prone to copyright violations, but they still need human-created content – technical manuals, repair data, customer service scripts, and more. Eventually, though, the supply of original content can run dry, and that's when AI models can go sideways.

AI Hiccups

One widely documented phenomenon is chatbot hallucination. If you pressure a generative AI system long enough, it may provide confident-sounding results that are, in fact, complete fabrications (this may be the most human-like quality of AI, come to think of it). These hallucinations can be the result of model complexity, inaccurate source data, or training data bias.

While AI solution providers are working to fix this problem, some researchers have declared it an unsolvable part of AI – their take is that these models are making guesses and cannot really separate fact from fiction. In more creative pursuits, these hallucinations can be funny or, in some cases, inspiring. In more technical applications, they can be disastrous. AI models can also be vulnerable to cyberattacks, with third parties deliberately tweaking input data to induce hallucinations.

Another issue is AI model collapse, which occurs when an AI solution is trained on other AI-generated content, essentially causing the model to eat its own tail, figuratively speaking. Once enough of this so-called synthetic data is fed into the model, the results become increasingly nonsensical.

In fact, in one study an AI large language model (LLM) was repeatedly trained on synthetic data while generating text about English architecture, until its responses became strange and curiously fixated on jackrabbits. AI image generators trained on AI-generated art have also been shown to produce increasingly indecipherable results.
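For more technically minded readers, here is a minimal, purely illustrative Python sketch of the model collapse idea (it is not drawn from the study above): each "generation" of a toy model is fit only to samples produced by the previous generation, so estimation errors compound and the model drifts away from the original human-created data.

```python
# Toy illustration of model collapse: each "generation" is fit only to data
# sampled from the previous generation, never to the original human-created
# data. The estimated mean and spread drift as synthetic data compounds.
# (Hypothetical example for illustration only.)
import random
import statistics

random.seed(42)

# "Human-created" data: samples from a distribution with mean 0, std 1.
real_data = [random.gauss(0, 1) for _ in range(200)]

mean = statistics.mean(real_data)
std = statistics.stdev(real_data)
print(f"generation 0: mean={mean:+.3f}, std={std:.3f}")

# Each subsequent generation trains only on output from the previous model.
for generation in range(1, 11):
    synthetic = [random.gauss(mean, std) for _ in range(200)]
    mean = statistics.mean(synthetic)
    std = statistics.stdev(synthetic)
    print(f"generation {generation}: mean={mean:+.3f}, std={std:.3f}")
```

Run over enough generations, the estimates wander further from the original distribution, which is the same dynamic, in miniature, that degrades large models trained on their own output.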

So, for service organizations evaluating AI solutions that can help guide technicians through a repair, help build better routes, or help improve maintenance scheduling based on equipment performance, there are four critical factors to consider:

  1. What do you want the AI platform to do? In service, the best current scenario, given the maturity of the technology, is to have it operate in a co-pilot mode, helping team members make decisions where there are a lot of variables in play – things like routing, scheduling, troubleshooting, and predicting future maintenance or parts needs.
  2. What data is being used to train the AI platform? That information, whether it comes from public or shared sources (like maps) or from company-specific systems, should be clean and accurate and, critically, created or vetted by actual people. AI models trained on other AI-generated content will see their results degrade over time.
  3. Is the AI platform in compliance with existing privacy and intellectual property regulations? This will vary by region (and in the U.S., at least, things are somewhat in flux). The key is to make sure you are not violating the privacy of your clients and that the AI solution is not building models on someone else's proprietary information without their permission.
  4. How will AI outputs be used by team members? AI solutions do not really make decisions, per se; they make very educated guesses based on ingesting far more data than a person could ever hope to consume. In service, the best-case scenario right now is that the software makes recommendations, and a team member evaluates those recommendations against their own experience and observations of actual conditions to make what is (hopefully) a better, faster decision. A minimal sketch of that pattern follows this list.
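To make that fourth point concrete, here is a short, hypothetical Python sketch of the co-pilot pattern: the AI surfaces a recommendation along with its rationale, and a dispatcher accepts or overrides it before anything is acted on. The names and data (get_route_recommendation, Recommendation) are invented for illustration and do not refer to any real field service API.

```python
# Sketch of the "co-pilot" pattern: the AI recommends, a person decides.
# All names and data here are hypothetical.
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str        # what the AI suggests doing
    rationale: str     # the data or reasoning behind the suggestion
    confidence: float  # the model's own confidence score, 0.0-1.0

def get_route_recommendation() -> Recommendation:
    # Stand-in for a real model call; returns a canned suggestion.
    return Recommendation(
        action="Send technician #12 to the downtown site first",
        rationale="Shortest drive time and the needed part is already on the truck",
        confidence=0.82,
    )

def human_review(rec: Recommendation) -> bool:
    # The dispatcher sees the suggestion and its rationale, then decides.
    print(f"AI suggests: {rec.action}")
    print(f"Because: {rec.rationale} (confidence {rec.confidence:.0%})")
    answer = input("Accept this recommendation? [y/n] ")
    return answer.strip().lower().startswith("y")

if __name__ == "__main__":
    rec = get_route_recommendation()
    if human_review(rec):
        print("Recommendation accepted; dispatching.")
    else:
        print("Recommendation overridden by dispatcher.")
```

The important design choice is that the recommendation never executes on its own: the person with knowledge of actual field conditions stays in the loop.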

I have written before about how AI can be used in field service here, here, and here. If you have thoughts on how AI can be used (or not!) in service, I would love to hear from you.