Want to know more?
Are you interested in this project? Or do you have one just like it? Get in touch. We’d love to tell you more about it.
Teams at a leading US university faced a constant struggle – simple HR-related queries were taking far too long to complete, and answers were often based on inaccurate or incomplete data accessible only to colleagues with technical expertise.
Recognising the potential of AI technologies and Large Language Models (LLMs) to enhance decision-making and improve operational efficiency, the university realised it had an opportunity to explore the power of AI while also solving a real-world problem.
Partnering with Equal Experts, the university launched a three-month proof of concept project to transform how it accessed and used HR data with artificial intelligence. Through cloud infrastructure improvements, data engineering best practices and natural language LLM solutions, we supported the university in breaking down barriers to information and streamlining complex HR queries.
The project successfully demonstrated how the university could use AI and LLMs to modernise its HR data management – enabling all users to extract insights effortlessly and make faster, data-driven decisions while also providing a foundation for future LLM adoption across the university.
- Three months to develop a proof of concept LLM
- Eliminated delays and bottlenecks
- A foundation for expanding AI use
Our client is a leading US university with a strong research focus. A non-profit organisation, it aims to pursue groundbreaking research while supporting its students through higher education and college life.
Human resources impact every single staff member in an organisation. Teams need to be able to rely on HR data to answer vital questions quickly every day – how many employees are nearing retirement? Who needs a performance review in the coming months? How many vacation days does a team member have left for the year?
But when this information is locked across fragmented, siloed systems with inconsistent definitions and duplicated data, answering simple questions can quickly become a time-consuming and resource-intensive task.
For our client, this challenge presented an ideal opportunity. By using HR data, it could assess its capacity to integrate modern AI technologies within a practical context and solve a real organisational challenge at the same time.
As part of a proof of concept, the university aimed to ingest raw data into a cloud-based data warehouse (Snowflake), improve the data quality and design a robust data model. This would enable the organisation to investigate and prototype the use of LLMs for natural language access to HR data, simplifying complex data querying and making vital HR data more accessible to users across the university.
Working closely with the university’s HR, data and technology teams, we took an agile, iterative approach to solve the challenge within a tight three-month timeline.
We began by understanding the university’s requirements and data landscape, taking a deep dive into HR data systems, pain points and success criteria. From this, we tackled the underlying data issues systematically, integrating raw HR data into a cloud-based Snowflake data warehouse and building a robust data model for efficient querying. Data quality issues were also addressed by unifying column names, standardising formats, and removing duplicate records.
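To illustrate the kind of clean-up described above, here is a minimal Python sketch. The field names, date formats and records are hypothetical, not the university's actual schema – in practice this logic would run as transformations inside the Snowflake warehouse.

```python
from datetime import datetime

# Hypothetical raw HR records with inconsistent field names,
# mixed date formats and a duplicated row -- illustrative only.
raw = [
    {"EMP_ID": 101, "Hire Date": "2015-03-01", "dept": "HR "},
    {"EMP_ID": 102, "Hire Date": "01/07/2018", "dept": "Finance"},
    {"EMP_ID": 102, "Hire Date": "01/07/2018", "dept": "Finance"},
    {"EMP_ID": 103, "Hire Date": "2020-11-15", "dept": "hr"},
]

# Map the inconsistent source column names onto one convention.
COLUMN_MAP = {"EMP_ID": "employee_id", "Hire Date": "hire_date", "dept": "department"}
DATE_FORMATS = ("%Y-%m-%d", "%d/%m/%Y")

def parse_date(value):
    """Standardise dates by trying each known source format."""
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(value, fmt).date()
        except ValueError:
            continue
    raise ValueError(f"unrecognised date: {value!r}")

def clean_record(record):
    """Unify column names and standardise field formats."""
    row = {COLUMN_MAP[k]: v for k, v in record.items()}
    row["department"] = row["department"].strip().upper()
    row["hire_date"] = parse_date(row["hire_date"])
    return row

# Clean every record, then remove duplicates on the employee identifier.
seen, clean = set(), []
for rec in raw:
    row = clean_record(rec)
    if row["employee_id"] not in seen:
        seen.add(row["employee_id"])
        clean.append(row)

print(clean)
```

The same three steps – rename, standardise, de-duplicate – apply regardless of whether they are implemented in Python, dbt models or warehouse SQL.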
With access to quality data in place, we turned to the LLM component, evaluating and prototyping an AI-driven solution. We investigated several LLMs, including Anthropic’s Claude and Mistral models, selecting Mistral as the baseline due to its performance. For complex query generation, we found strategic prompt engineering to be more effective and efficient than costly fine-tuning. We developed a prompt strategy that guides the LLM in generating accurate SQL queries from natural language inputs, and integrated the LLM into a user-friendly interface so users can query data themselves, reducing reliance on technical expertise.
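A prompt strategy of this kind typically embeds the data model’s schema and a few worked examples directly into the prompt, so the model can produce valid SQL without fine-tuning. The sketch below is illustrative only – the table names, columns and example questions are hypothetical, not the university’s actual data model.

```python
# Hypothetical schema description for an HR data model; a real
# implementation would generate this from the warehouse catalogue.
SCHEMA = """\
Table employees(employee_id INT, name TEXT, department TEXT,
                hire_date DATE, retirement_date DATE)
Table leave_balances(employee_id INT, year INT, vacation_days_left INT)"""

# A few worked examples ("few-shot" prompting) steer the model
# towards the house SQL style and the correct tables.
EXAMPLES = [
    ("How many employees work in Finance?",
     "SELECT COUNT(*) FROM employees WHERE department = 'Finance';"),
]

def build_prompt(question: str) -> str:
    """Assemble a natural-language-to-SQL prompt for an LLM."""
    shots = "\n".join(f"Question: {q}\nSQL: {sql}" for q, sql in EXAMPLES)
    return (
        "You are an assistant that translates questions about HR data "
        "into SQL for the schema below. Answer with SQL only.\n\n"
        f"Schema:\n{SCHEMA}\n\n{shots}\n\nQuestion: {question}\nSQL:"
    )

prompt = build_prompt("How many vacation days does employee 101 have left this year?")
print(prompt)
```

The assembled prompt would then be sent to the chosen model, with the returned SQL executed against the warehouse on the user’s behalf.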
By ensuring that stakeholders were thoroughly engaged throughout the process, including through demonstrations and feedback sessions, we developed the solution iteratively so it would meet the real-world requirements of end users. We also implemented infrastructure as code, using Terraform to provision cloud infrastructure for repeatability and scalability. Comprehensive documentation, including detailed technical guides and architecture diagrams, will enable future engineers to build on the proof of concept, supporting long-term value for our client.
The three-month proof of concept successfully validated our client’s ability to integrate AI and LLMs as powerful tools for data management and decision-making.
Beyond solving the immediate challenges of accessing and using HR data for decision-making, the project also laid the foundation for AI-driven innovation across the university. With data ingestion, modelling and cleaning embedded alongside the required cloud infrastructure, the organisation can quickly and effectively extend the LLM solution to other data sets and departments, offering long-term value beyond the initial HR use case.