The 57% Hallucination Rate in LLMs: A Call for Better AI Evaluation

Author: Max Milititski

Large Language Models (LLMs) are rapidly evolving, pushing the boundaries of what AI can achieve. However, the traditional methods used to evaluate their capabilities are struggling to keep pace. Here’s why traditional LLM evaluation methods […]

Quantifying and Improving Model Outputs and Fine-Tuning with Tasq.ai

In a recent webinar, we showed how Tasq.ai’s LLM evaluation analysis led to a 71% improvement in results. In the dynamic world of AI, where every algorithmic breakthrough propels us into the future, Tasq.ai, in collaboration with Iguazio (now part of McKinsey), has set a new precedent in the realm of Machine Learning Operations (MLOps). […]

Image Quality Assessment

In supervised machine learning, the quality of a model is directly correlated with the quality of the data us…