This is a beta module of the NID System that lets users leverage the capabilities of generative AI. The feature uses a fully localized Large Language Model together with vector embedding stores to answer users' queries from information available in the NID library. This creates a cutting-edge yet secure environment in which the semantic meaning and context of text are encoded, allowing the LLM to understand context and judge similarity when answering query prompts.
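To illustrate how a vector embedding store judges similarity, the minimal sketch below embeds a few sentences and ranks them against a query by cosine similarity. The sentence-transformers library and the all-MiniLM-L6-v2 model are illustrative assumptions; the page does not disclose which components NID-GPT actually uses.

    # Minimal sketch of embedding-based similarity; NOT NID-GPT's actual stack.
    # The library and model name here are assumptions chosen for illustration.
    from sentence_transformers import SentenceTransformer
    import numpy as np

    model = SentenceTransformer("all-MiniLM-L6-v2")

    documents = [
        "The NID library catalogue lists books and journals.",
        "Opening hours are 9 am to 5 pm on weekdays.",
    ]
    query = "What is the NID Library System?"

    # Encode documents and query into dense vectors that capture semantic meaning.
    doc_vecs = model.encode(documents, normalize_embeddings=True)
    query_vec = model.encode([query], normalize_embeddings=True)[0]

    # With normalized vectors, the dot product equals cosine similarity.
    scores = doc_vecs @ query_vec
    best = int(np.argmax(scores))
    print(f"Most similar: {documents[best]} (score={scores[best]:.3f})")

Because similarity is computed in this semantic vector space rather than by keyword overlap, the first document would score highest for the query even though it shares few exact words with it.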
We are currently testing various embedding schemes and LLMs for optimal results. We are also evaluating how well the system performs on standard CPU servers, and what value GPU-based NID hosting infrastructure would add.
At the moment only a limited set of NID document sources is being added to the NID-GPT vector store; once the module matures, the functionality will be extended to the larger dataset available in the NID and Austria-Forum repositories.
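For readers curious what "ingesting" documents into the vector store involves, here is a rough sketch: each source is split into chunks, each chunk is embedded, and the vectors are added to an index with their provenance recorded. FAISS, the chunk size, and the file name are illustrative assumptions, not NID-GPT internals.

    # Rough sketch of document ingestion into a vector index, assuming FAISS
    # and sentence-transformers; NID-GPT's real pipeline may differ entirely.
    import faiss
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")

    def chunk(text: str, size: int = 500) -> list[str]:
        """Split a document into fixed-size character chunks (illustrative)."""
        return [text[i:i + size] for i in range(0, len(text), size)]

    # Hypothetical source document standing in for the NID library dataset.
    corpus = {"nid_handbook.txt": open("nid_handbook.txt").read()}

    chunks, sources = [], []
    for name, text in corpus.items():
        for piece in chunk(text):
            chunks.append(piece)
            sources.append(name)  # keep provenance so answers can cite sources

    # Embedding every chunk is the slow part of ingestion, especially on CPU.
    vectors = model.encode(chunks, normalize_embeddings=True)
    index = faiss.IndexFlatIP(vectors.shape[1])  # inner product == cosine here
    index.add(vectors)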
1. To ask a question, type it into the search bar, for example:
   What is the NID Library System?
2. Hit Enter on your keyboard or click Ask!
3. Wait (please be patient!) while the LLM consumes the prompt and prepares the answer. Processing is currently done on CPU; once a GPU is added to the system, the response time will improve.
4. Once done, it will print the answer and the 4 sources it used as context from your documents; you can then ask another question without restarting, just wait for the prompt again. A sketch of this retrieve-then-answer loop follows these steps.
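To make step 4 concrete, the sketch below (continuing the ingestion sketch above, and reusing its model, index, chunks, and sources) retrieves the four most similar chunks, feeds them to a local LLM as context, and prints the answer with its sources. The llama-cpp-python library and the model file name are illustrative stand-ins for whichever localized model NID-GPT actually runs.

    # Hedged sketch of the query loop behind "Ask!": retrieve 4 chunks, prompt
    # a local LLM with them, print the answer and sources. Continues the
    # ingestion sketch above; the model file path is hypothetical.
    from llama_cpp import Llama

    llm = Llama(model_path="local-model.gguf")  # placeholder local model file

    def ask(question: str) -> None:
        q_vec = model.encode([question], normalize_embeddings=True)
        _, ids = index.search(q_vec, 4)        # top-4 most similar chunks
        context = [chunks[i] for i in ids[0]]
        prompt = ("Answer using only the context below.\n\n"
                  + "\n---\n".join(context)
                  + f"\n\nQuestion: {question}\nAnswer:")
        out = llm(prompt, max_tokens=256)      # CPU inference: expect a wait
        print(out["choices"][0]["text"].strip())
        print("Sources:", sorted({sources[i] for i in ids[0]}))

    ask("What is the NID Library System?")

Grounding the prompt in retrieved chunks is what lets the module answer from the NID library rather than from the model's general training data, and keeping the chunk-to-source mapping is what allows it to list the documents it used.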