
What’s Required to Exploit Generative AI Responsibly?

Generative AI is creating a new wave of innovation and possibility, but it is not without risk. Executives feel driven to take advantage of AI but are also jump-out-of-their-skin scared of the risks, and rightfully so.

AI has been around for decades, but a sea change began in the late ‘90s. The increased application of machine learning, and particularly of neural networks, opened a new path and cracked long-standing challenges. Instead of applying logic- or rules-based algorithms to problem-solving, data scientists turned neural networks and reinforcement learning loose on vast amounts of data in a goal-oriented approach: the system trained itself by trying possible paths to a goal and scoring those most likely to yield the desired result.

Examples include learning which moves lead to a win at various stages of a chess game, or machine translation, where the model searches for semantic similarity rather than substituting individual words in a more literal translation.

The model also receives ongoing feedback, which refines its knowledge of the semantics. One characteristic of this “generative” approach, and perhaps the scariest, is that although the solutions it produces can be shown to work, the way in which the model arrived at them is not always clear.
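As a rough illustration of that goal-oriented, feedback-driven training, the toy sketch below scores the moves that tend to reach a goal state. It is a minimal tabular Q-learning example on a five-cell corridor, offered only as an analogy, not a depiction of how production chess engines or translation models are actually built.

```python
import random

# Toy illustration of the approach described above: an agent repeatedly tries
# paths toward a goal state and scores the moves that tend to lead there.

N_STATES = 5          # states 0..4, goal is state 4
ACTIONS = [-1, +1]    # step left or right
ALPHA, GAMMA, EPISODES = 0.5, 0.9, 200

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(EPISODES):
    state = 0
    while state != N_STATES - 1:
        action = random.choice(ACTIONS)                  # explore possible moves
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0     # feedback arrives only at the goal
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        # score this move by how reliably it leads toward the desired result
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# After training, the learned scores favor the moves that head toward the goal.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)})
```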

Any shiny, buzzy concept usually has highs and lows, and the world of generative AI is no exception.

Teams applying it must temper their excitement with caution to ensure responsible use. Here are a few ways enterprises can apply the technology meaningfully without getting carried away.

Recognize the Present and Future of AI

Companies must think ahead to where they believe generative AI will have the most business impact, while also accepting that we are in a period of discovery. While focusing on near-term needs, enterprises should simultaneously develop a plan for applying these technologies at scale and work out how they will ensure the accuracy and truthfulness of responses from AI-based systems. We know that the data volumes generative AI applications require will increase dramatically. Planning now for this increase in data, both from within the enterprise and from external sources, is vital. Enterprises should start to catalog the data they’ll need to provide to large language models (LLMs), both to ground the models’ responses and to condition questions in facts and truths.
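As a rough sketch of what such a catalog might look like, the example below inventories data sources with owners, freshness dates, and topic tags, then matches a question to the sources that could ground it. The schema and the find_sources() helper are illustrative assumptions, not a standard.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CatalogEntry:
    name: str             # e.g. "return_policy_docs"
    location: str         # where the data lives (document store, warehouse table, API)
    owner: str            # accountable team
    last_refreshed: date  # freshness matters when grounding answers in current facts
    topics: list[str]     # keywords used to match a question to relevant data

CATALOG = [
    CatalogEntry("return_policy_docs", "s3://corp-docs/policies/returns/",
                 "customer-ops", date(2024, 5, 1), ["returns", "refund", "policy"]),
    CatalogEntry("product_spec_sheets", "warehouse.products.specs",
                 "product", date(2024, 4, 15), ["product", "spec", "dimensions"]),
]

def find_sources(question: str) -> list[CatalogEntry]:
    """Pick catalog entries whose topic tags appear in the question (naive keyword match)."""
    q = question.lower()
    return [e for e in CATALOG if any(t in q for t in e.topics)]

print([e.name for e in find_sources("What is the refund policy for damaged items?")])
```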


Our first impulse may be to apply generative AI broadly. There’s near-term value to be harvested in doing so. However, those who thoughtfully apply contextual data specific to their enterprise will gain the strongest advantage.


Don’t Let Hallucinations Go Unchecked

LLMs are at the heart of generative AI.

The term “generative” implies that an LLM produces output that appears to result from intelligent thought. In reality, an LLM strings together text, video frames, or other assets to generate responses that are not original, and it may miss logical, biological, ethical, social, and cultural factors. Without contextual data to inform them, inaccurate answers, or hallucinations, may be unavoidable. Think of the LLM as a well-read individual willing to provide an answer to any question, but one who has no feel for the potential impact of a wrong answer.

Providing LLMs with data specific to the circumstances reduces the probability of hallucinations. Users must ground outcomes in “truths.” Accuracy is context-sensitive: generalized LLMs need to be augmented with information specific to the question and, more to the point, with information that’s up to date.

Ensuring that decisions are based on the most current information makes a difference in a world where increasing rates of change are becoming the norm.
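The sketch below illustrates this kind of grounding: the question is conditioned on dated, enterprise-specific facts, and the model is instructed to answer only from them. The retrieve_facts() and call_llm() functions are hypothetical placeholders standing in for whatever retrieval layer and model API an organization actually uses.

```python
from datetime import date

def retrieve_facts(question: str) -> list[tuple[str, date]]:
    # In practice this would query the cataloged enterprise sources; hard-coded here.
    return [("Standard returns are accepted within 30 days of delivery.", date(2024, 5, 1))]

def build_grounded_prompt(question: str) -> str:
    facts = retrieve_facts(question)
    context = "\n".join(f"- ({d.isoformat()}) {text}" for text, d in facts)
    return (
        "Answer using ONLY the facts below. If they do not contain the answer, "
        "say you do not know.\n"
        f"Facts:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Replace with a call to your model provider.")

print(build_grounded_prompt("Can I return an item I bought six weeks ago?"))
```

Dating each fact matters: it lets the prompt, and any reviewer, see whether the grounding information is current enough to trust.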

Cache Validated Results

One of the most immediate applications of generative AI is answering questions from customers and employees about products, processes, and policies. By caching validated results and searching there first, enterprises can not only return answers faster but also optimize resource use. A side benefit is knowing the answers are correct.
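A minimal sketch of that cache-first pattern follows: validated answers are searched before the model is called, and the model is only a fallback on a miss. The exact-match lookup and the answer_with_llm() stub are illustrative assumptions; a real system might search the cache semantically instead.

```python
# Validated question-and-answer pairs, reviewed by a human before being stored.
validated_cache: dict[str, str] = {
    "what is the standard return window?": "30 days from delivery, per the current returns policy.",
}

def normalize(question: str) -> str:
    """Lowercase, collapse whitespace, and standardize the trailing question mark."""
    return " ".join(question.lower().split()).rstrip("?") + "?"

def answer_with_llm(question: str) -> str:
    raise NotImplementedError("Fall back to the model only on a cache miss.")

def answer(question: str) -> str:
    key = normalize(question)
    if key in validated_cache:          # fast, cheap, and known-correct
        return validated_cache[key]
    result = answer_with_llm(question)  # slower, costs tokens, needs validation
    # Once a human validates the result, store it so the next lookup is a hit.
    return result

print(answer("What is the standard return window?"))
```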

Overall, organizations that are both excited and concerned about AI are approaching it responsibly. We are all in new territory here. AI can provide immense cost savings and open routes of innovation in science, product design, and business decision-making. But we must be intentional and aware as we apply this technology, to avoid mistakes we may not perceive until it is too late. By applying context and using data to augment and guide LLMs, we can take advantage of the technology safely.

And by caching results from LLMs and searching those first, we can be more cost-efficient.


