Monday, October 28, 2024

Implementing AI Locally: Everything You Need to Know

In this article we'll learn how to install and run an AI on your local PC. To use it like ChatGPT, we need to install these components:

  1. Install Ollama
  2. Install Open WebUI (a web interface for the chatbot)

How fast your local AI answers will depend on the power of your computer. In my case I have a common, basic PC:

  • ThinkPad L490 laptop
  • 16 GB RAM
  • 500 GB HDD

1. Install Ollama

You just need to visit the official Ollama website and install it on Linux:

curl -fsSL https://ollama.com/install.sh | sh

After that, you need to run:

ollama run llama3.1

With that line, Ollama downloads the model if it's the first time.
You can install multiple models, but in my case I use llama3.1 because it's really light.
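
A few other Ollama subcommands are handy for managing models. This is just a sketch, assuming Ollama is already installed; the guard skips the commands when the `ollama` binary is missing:

```shell
# Sketch: basic model management with the Ollama CLI.
# Runs only if ollama is actually installed on this machine.
if command -v ollama >/dev/null 2>&1; then
  ollama list          # show the models already downloaded
  ollama pull llama3.1 # download a model without opening a chat
  ollama rm llama3.1   # delete a model to free disk space
else
  echo "ollama is not installed yet"
fi
```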

After running the command, we'll see the command-line interface where we can write to the AI in Ollama (see the image attached below).
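
Besides the interactive CLI, Ollama also exposes a REST API on port 11434. As a sketch (assuming the server is running and llama3.1 is already downloaded), you can send a single prompt with curl; the block below falls back to a message when the server is unreachable:

```shell
# Sketch: send one prompt to the Ollama REST API (default port 11434).
# "stream": false makes it return the whole answer as a single JSON object.
if curl -s --max-time 2 http://127.0.0.1:11434/api/tags >/dev/null 2>&1; then
  curl -s http://127.0.0.1:11434/api/generate \
    -d '{"model": "llama3.1", "prompt": "Say hello in one word.", "stream": false}'
else
  echo "Ollama is not reachable on 127.0.0.1:11434 - run 'ollama serve' first"
fi
```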

2. Install Open WebUI

Now, to make typing and interacting with the AI more comfortable, we'll install a web UI called Open WebUI.

The simplest installation uses Docker. Copy and paste the following command to start the web service.

This command runs the Ollama server (so you know the URL it listens on):

# url = 127.0.0.1:11434
ollama serve
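
To confirm the server is actually listening before moving on, you can probe the /api/tags endpoint, which answers with the downloaded models as JSON. A small sketch:

```shell
# Sketch: check whether Ollama is listening on its default address.
# /api/tags returns the installed models as JSON when the server is up.
if curl -s --max-time 2 http://127.0.0.1:11434/api/tags; then
  echo "Ollama server is up on 127.0.0.1:11434"
else
  echo "Ollama server is not running - start it with 'ollama serve'"
fi
```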

In the command below, change or replace the OLLAMA_BASE_URL variable if necessary:

docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main

If it's not working on http://localhost:8080/, run the previous command without -d. This runs the container in the foreground so you can see the whole debugging output, spot the error, and fix it:

docker run --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main
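
One common error at this step is the container name already being in use, since the first `docker run` created a container named open-webui even if it failed. As a sketch (assuming Docker is installed), these commands let you inspect the old container and remove it so the name can be reused:

```shell
# Sketch: inspect and clean up a failed open-webui container.
if command -v docker >/dev/null 2>&1; then
  docker ps -a --filter name=open-webui  # list the container, even if stopped
  docker logs open-webui                 # read its startup output
  docker rm -f open-webui                # free the name so 'docker run' can reuse it
else
  echo "docker is not installed"
fi
```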

If you later close the terminal and want to start the chat web UI again,
you need to start the container with this Docker command:

docker start open-webui

Now we can chat with the AI through a web interface running at http://localhost:8080/; just register and log in.


Conclusion

  • Personally, I installed Ollama locally because ChatGPT crashed a lot for me and I wasted a lot of time refreshing the page.
  • Setting up an AI system locally on your PC can be a rewarding experience, allowing you to leverage powerful models like Llama 3.1 without relying on external servers.
