Making the gen AI and data connection work
With all the hype surrounding gen AI, it’s no surprise that it’s the dominant AI solution for companies, according to a Gartner survey released in May. Twenty-nine percent of 644 executives at companies in the US, Germany, and the UK said they were already using gen AI, making it more widespread than other AI-related technologies such as optimization algorithms, rule-based systems, natural language processing, and other types of ML.
The real challenge, however, is to “demonstrate and estimate” the value of projects, not only in relation to TCO and the broad-spectrum benefits they can deliver, but also in the face of obstacles such as a lack of confidence in the technical aspects of AI and the difficulty of assembling sufficient data volumes. But these challenges are not insurmountable.
Privacy protection
The first step in AI and gen AI projects is always to get the right data. “In cases where privacy is essential, we try to anonymize as much as possible and then move on to training the model,” says University of Florence technologist Vincenzo Laveglia. “A balance between privacy and utility is needed. If the level of information in the data is the same after anonymization, the data is still useful. But if removing personal or sensitive references makes the data no longer effective, a problem arises. Synthetic data avoids these difficulties, but it’s not exempt from trade-offs either. We have to make sure there’s a balance between the various classes of information, otherwise the model becomes an expert on one topic and very uncertain on others.”
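The two checks Laveglia describes, stripping direct identifiers before training and verifying that the classes stay balanced, can be sketched in a few lines. This is a minimal, illustrative example using only the Python standard library; the field names (`name`, `email`, `label`) and the salted-hash approach are assumptions for the sketch, not a method described in the article, and real anonymization would also have to account for quasi-identifiers, not just direct ones.

```python
import hashlib
from collections import Counter

def anonymize(records, pii_fields=("name", "email")):
    """Replace direct identifiers with salted hashes so records stay
    linkable across the dataset but no longer expose personal data.
    Illustrative only: quasi-identifiers (age, ZIP code, etc.) can
    still re-identify people and need separate treatment."""
    salt = "example-salt"  # in practice, a secret per-dataset salt
    out = []
    for rec in records:
        rec = dict(rec)  # copy so the caller's data is untouched
        for field in pii_fields:
            if field in rec:
                digest = hashlib.sha256((salt + str(rec[field])).encode())
                rec[field] = digest.hexdigest()[:12]
        out.append(rec)
    return out

def class_balance(records, label_field="label"):
    """Report each class's share of the dataset, to catch the imbalance
    that makes a model 'an expert on one topic and very uncertain on
    others' before training starts."""
    counts = Counter(rec[label_field] for rec in records)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

# Hypothetical toy dataset
records = [
    {"name": "Ann", "email": "a@example.com", "label": "churn"},
    {"name": "Bob", "email": "b@example.com", "label": "stay"},
    {"name": "Cy",  "email": "c@example.com", "label": "stay"},
]
anon = anonymize(records)
balance = class_balance(anon)
```

A skewed `balance` result would be the signal to rebalance, whether by resampling real records or generating synthetic ones for the underrepresented classes.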