Local AI with LM Studio

You can’t throw a stone these days without hitting something labelled “AI”. The “I” in AI is still largely missing, but some things can imitate it convincingly. One of the current hype areas is Large Language Models (LLMs): software built on large datasets that can give plausible-sounding answers to questions. You are probably familiar with commercial LLMs, such as ChatGPT.
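
As a quick taste of what “local” means in practice: LM Studio exposes whatever model you have loaded through an OpenAI-compatible HTTP server on your own machine. The sketch below is a minimal, hedged example of querying it, assuming LM Studio’s server is running on its default port (1234) with a model loaded; the model name is a placeholder.

```typescript
// Minimal sketch: ask a question to LM Studio's local OpenAI-compatible server.
// Assumption: LM Studio is running on its default port (1234) with a model loaded.
async function ask(question: string): Promise<string> {
  const response = await fetch("http://localhost:1234/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "local-model", // placeholder; LM Studio answers with the loaded model
      messages: [{ role: "user", content: question }],
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content;
}

ask("In one sentence, what is a large language model?").then(console.log);
```

Everything stays on localhost: no API keys and no data leaving your machine.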

Image by Vincent van Dam

Solving Sudokus the hard way

Sudokus are fun! For that reason, we included one in one of our traditional Easter puzzles, and have even made nostalgic 8-bit video games out of them in the past. If you enjoy solving them the old-fashioned way on a hard copy, it’s recommended to have a pencil handy, as you may need to keep track of multiple candidate options.

A couple of years ago, I didn’t have a pencil nearby and used a pen instead. Wrong choice! I ended up in a situation where I had to guess, and my pen made that guess final. The good thing about guessing in sudokus is that you can progress quite quickly; the bad thing is that you only see the consequences of your guess at the very last moment. That’s exactly what happened to me: I had to guess, and by the time my error was obvious, it was too late to go back. This frustrated the hell out of me, and I felt I needed to compensate.
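
That erasable guess is, of course, exactly what a backtracking solver gives you for free: pencil a value in, follow the consequences, and rub it out the moment it leads to a contradiction. As a rough illustration of the idea (not the article’s actual solver), here is a minimal sketch in TypeScript, with 0 marking an empty cell:

```typescript
type Grid = number[][]; // 9x9, 0 = empty

// Check whether value v can go at row r, column c without a conflict.
function fits(g: Grid, r: number, c: number, v: number): boolean {
  for (let i = 0; i < 9; i++) {
    if (g[r][i] === v || g[i][c] === v) return false; // row and column
  }
  const br = r - (r % 3), bc = c - (c % 3); // top-left of the 3x3 box
  for (let i = 0; i < 3; i++)
    for (let j = 0; j < 3; j++)
      if (g[br + i][bc + j] === v) return false;
  return true;
}

// Solve in place; returns true once every cell is filled consistently.
function solve(g: Grid): boolean {
  for (let r = 0; r < 9; r++) {
    for (let c = 0; c < 9; c++) {
      if (g[r][c] !== 0) continue;
      for (let v = 1; v <= 9; v++) {
        if (!fits(g, r, c, v)) continue;
        g[r][c] = v;          // pencil the guess in
        if (solve(g)) return true;
        g[r][c] = 0;          // erase it: the guess led to a dead end
      }
      return false;           // nothing fits here, backtrack further up
    }
  }
  return true;                // no empty cells left
}
```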

Image by Vincent van Dam

Writing a Visual Studio Code extension to chat with your code, an experiment

Chatbots everywhere, handy assistants in applications: AI is booming, generative models in particular. More than a year after the disruptive release of ChatGPT, a lot has happened. Language models are everywhere, and the tooling to run them locally is becoming easier to use as well. Let’s take one of these local model servers, Ollama, and create a Visual Studio Code extension that uses it to answer questions about our code base with one of the community AI models.
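
To sketch the plumbing before diving into the extension itself: Ollama serves a small REST API on localhost, and the extension essentially sends a snippet of code plus a question to it. Below is a hedged, minimal example of that call, assuming Ollama runs on its default port (11434) and that a model such as codellama has been pulled; both are assumptions about your local setup.

```typescript
// Sketch of the request a VS Code extension could send to a local Ollama server.
// Assumptions: Ollama on its default port (11434), codellama pulled locally.
async function askAboutCode(snippet: string, question: string): Promise<string> {
  const response = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "codellama",
      prompt: `${question}\n\n${snippet}`,
      stream: false, // return one JSON object instead of a token stream
    }),
  });
  const data = await response.json();
  return data.response;
}
```

Wrapping this in an extension is then mostly a matter of grabbing the active editor’s selection and presenting the answer, for instance in a webview or an output channel.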