Building a Financial Tracker with LLM Insights

- Published on
- Authors: Eriitunu Adesioye (@Eri_itunu)
Overview
Over the past few months, I built a full-stack financial tracker with Next.js, initially just for personal use. The main goal was to track my spending while exploring Next.js's full-stack capabilities: authentication, data persistence, and data visualization.
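For context, almost everything in the app hangs off a simple per-user transaction record. The shape below is an illustrative sketch, with assumed field names rather than the app's actual schema:

```ts
// Illustrative shape of a stored transaction. Field names are assumptions,
// not the app's actual schema.
type Transaction = {
  id: string;
  userId: string;       // ties the record to the authenticated user
  amount: number;       // positive = income, negative = expense
  category: string;     // e.g. "groceries", "transport"
  description?: string;
  date: string;         // ISO date string, e.g. "2024-05-14"
};
```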
At first, the app didn’t include any AI features. It focused purely on categorization, visualization, and authentication. But as I iterated, I started experimenting with LLMs to generate spending summaries and personalized insights.
Integrating AI features
I began testing with popular commercial APIs like OpenAI and DeepSeek, but quickly found that most of them were no longer free. To continue prototyping, I switched to a free model available through OpenRouter. This allowed me to experiment with generating text-based spending insights, such as monthly summaries and recommendations, without incurring extra costs.
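To give a sense of what that looks like in practice, here is a rough sketch of a Next.js route handler calling OpenRouter's OpenAI-compatible chat completions endpoint. The file path, model name, and prompt are illustrative assumptions, not the exact code in the app:

```ts
// app/api/insights/route.ts (illustrative sketch, not the app's exact code)
// OpenRouter exposes an OpenAI-compatible chat completions endpoint; the
// model name below is a placeholder for whichever free model is in use.
import { NextResponse } from "next/server";

export async function POST(request: Request) {
  const { transactions } = await request.json();

  const prompt = [
    "You are a personal finance assistant.",
    "Summarize this month's spending and suggest one improvement.",
    `Transactions (JSON): ${JSON.stringify(transactions)}`,
  ].join("\n");

  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "meta-llama/llama-3.1-8b-instruct:free", // placeholder free model
      messages: [{ role: "user", content: prompt }],
    }),
  });

  if (!res.ok) {
    return NextResponse.json({ error: "LLM request failed" }, { status: 502 });
  }

  const data = await res.json();
  const summary = data.choices?.[0]?.message?.content ?? "";
  return NextResponse.json({ summary });
}
```

Keeping the prompt and payload small like this also made it easy to swap models when one free option became unavailable.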
Exploring RAG for deeper insights
Next, I explored implementing Retrieval-Augmented Generation (RAG) to produce context-aware insights. The idea was to connect the model to historical user data and patterns — for example, surfacing trends or highlighting categories where spending was growing fastest.
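This is roughly what the retrieval step would have looked like: embed each snippet of spending history, score it against the user's question, and prepend the closest matches to the prompt. It's a sketch of the idea, assuming some external embedding model; none of it shipped.

```ts
// Conceptual sketch of the retrieval step behind the RAG idea (never shipped).
// The `embed` parameter stands in for an external embedding model.
type Embedded = { text: string; vector: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  const dot = a.reduce((sum, v, i) => sum + v * b[i], 0);
  const normA = Math.sqrt(a.reduce((sum, v) => sum + v * v, 0));
  const normB = Math.sqrt(b.reduce((sum, v) => sum + v * v, 0));
  return dot / (normA * normB);
}

// Retrieve the historical snippets most relevant to the user's question,
// then prepend them so the LLM can ground its answer in real data.
async function buildGroundedPrompt(
  question: string,
  history: Embedded[],
  embed: (text: string) => Promise<number[]>, // assumed embedding model
  topK = 5
): Promise<string> {
  const queryVector = await embed(question);
  const relevant = history
    .map((item) => ({ item, score: cosineSimilarity(queryVector, item.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK)
    .map(({ item }) => item.text);

  return [
    "Context from the user's spending history:",
    ...relevant,
    `Question: ${question}`,
  ].join("\n");
}
```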
However, I ran into a few practical challenges:
- My dataset was too small and inconsistent for meaningful retrieval.
- I lacked enough structured historical data to ground the LLM’s responses.
- The generated outputs often felt generic rather than actionable or personalized.
Because of these limitations, I decided not to pursue RAG further — at least until the dataset becomes richer and more consistent.
Final setup
In the end, I refined the system to focus on lightweight AI-driven insights, like summarizing spending trends or highlighting outliers. These were simple enough to be reliable, while still adding value beyond static charts or category breakdowns.
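As an example of what "lightweight" means here, outlier detection can be as simple as comparing each category's latest monthly total against its own history and flagging anything far above the mean, then letting the LLM phrase those flags. A rough sketch, with illustrative thresholds and field names:

```ts
// Lightweight, deterministic outlier check. Thresholds and field names are
// illustrative; `totals` is assumed to be sorted chronologically per category.
type MonthlyTotal = { category: string; month: string; total: number };

function findOutliers(totals: MonthlyTotal[], threshold = 2): string[] {
  const byCategory = new Map<string, number[]>();
  for (const t of totals) {
    const list = byCategory.get(t.category) ?? [];
    list.push(t.total);
    byCategory.set(t.category, list);
  }

  const outliers: string[] = [];
  for (const [category, values] of byCategory) {
    if (values.length < 3) continue; // not enough history to judge
    const mean = values.reduce((a, b) => a + b, 0) / values.length;
    const std = Math.sqrt(
      values.reduce((a, b) => a + (b - mean) ** 2, 0) / values.length
    );
    const latest = values[values.length - 1];
    if (std > 0 && (latest - mean) / std > threshold) {
      outliers.push(`${category}: latest month is well above its historical average`);
    }
  }
  return outliers;
}
```

Deterministic checks like this are cheap to run on every request, and the LLM only has to turn the results into readable prose rather than reason over raw numbers.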
The result is a financial tracker that’s practical, data-aware, and AI-augmented — even if not fully “intelligent” yet. It taught me valuable lessons about where AI adds value in small-scale apps, and where it still depends heavily on the quality and depth of available data.
Takeaway
Building this project helped me understand the real-world trade-offs of integrating LLMs into everyday tools. Sometimes, it’s better to refine what works reliably than to over-engineer for intelligence that your data can’t yet support.