Patchnotes
v0.5 ---- 21.04.2024
This version upgrade of SKAI introduces several new features, with more to come in the near future. Here is a breakdown of the changes:
Major Changes:
- Added Meta AI Provider
- llama3-70b: A new and very capable open source model by Meta AI.
- llama3-8b: The smallest of the new llama3 models, but still very powerful.
- Note: While these models are served through our Azure environment, they are hosted in the US and might not be GDPR compliant. Please do not feed these models any sensitive data.
- Added SD3 Image Provider
- Stable Diffusion 3: The best open source image generation model available to date.
- Stable Diffusion 3 Turbo: A faster and slightly less powerful version of the SD3 model.
- New Demo: Audio Transcription
- You can now transcribe audio files using the new demo.
- The demo is powered by OpenAI's Whisper model hosted in our Azure environment.
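If you want to script transcriptions instead of using the demo UI, a call against an Azure-hosted Whisper deployment typically looks like the minimal Python sketch below. The deployment name, endpoint variables, and API version are placeholder assumptions, not SKAI's actual configuration.

```python
# Minimal sketch: transcribing an audio file with a Whisper deployment on Azure OpenAI.
# The deployment name, endpoint, and API version are placeholders (assumptions).
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_version="2024-02-01",
)

with open("meeting.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper",  # name of the Whisper deployment (assumed)
        file=audio_file,
    )

print(transcript.text)
```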
Minor Changes:
- Images are no longer posted on SKAI by default. You now have to enable the corresponding switch in the settings to post images, or you can post them after creation.
- Fixed a bug in the data pipeline of PIA that caused recent data to not be processed correctly.
Outlook:
- The document comparison demo will be updated with a new UI, giving users access to a PDF viewer to inspect findings in the document. Once this is in place, the PDF viewer will be added to namespaces as well.
- I am almost done working on a new demo that takes a PDF file and identifies legal requirements in the document. The demo will also use the new PDF viewer component.
- Image upscaling and editing, as well as image-to-image generation, will be added to the Image Generation demo. This will be powered by the new SD3 models.
- A new centralized Chat Interface will enable users to switch between models independently of the model provider.
Thank you for using the platform! If you have any questions or suggestions or have found any bugs, please let me know!
v0.4.3.2 ---- 19.03.2024
This is a minor release adding one new model and fixing some bugs.
Added LLM Provider:
- Azure Mistral
- mistral-large: Similarly to the OpenAI models, we now have a private version of mistral-large hosted in our Azure environment in France.
Thank you for using the platform! If you have any questions or suggestions or have found any bugs, please let me know!
v0.4.3.1 ---- 12.03.2024
This is a minor release adding two new models and fixing some bugs.
Added LLMs:
- Cohere
- command-r: A new open model from Cohere. This model is supposed to be more powerful than gpt-3.5-turbo.
- Note: This model is integrated via the Cohere API. Please be aware that it might not be GDPR compliant or hosted in Europe.
- Anthropic
- claude-3-haiku: A new, very fast multimodal model from Anthropic with a long context window. It is the least powerful of the new claude-3 models, but still very capable.
- Note: This model is integrated via the Anthropic API. Please be aware that it might not be GDPR compliant or hosted in Europe.
Thank you for using the platform! If you have any questions or suggestions or have found any bugs, please let me know!
v0.4.3 ---- 07.03.2024
This update adds the vision preview for all available vision models.
Changes:
- Vision Models
- You can now attach images to a chat message by clicking the attachment icon in the chat input field. This will open a file picker where you can select an image from your device.
- At the moment, images must be smaller than 1 MB and in .jpg, .jpeg, or .png format.
- You can only attach images when selecting a vision model.
- The vision models are still in beta and might not work for all images or might produce misleading or wrong results.
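As a rough illustration of these limits, the sketch below validates an image against the 1 MB and .jpg/.jpeg/.png constraints and packs it into an OpenAI-style vision message. The function, file names, and message format are assumptions for illustration; SKAI handles this internally when you use the attachment button.

```python
# Illustrative sketch: enforcing the demo's image limits and building an
# OpenAI-style vision message. SKAI's internal format may differ.
import base64
from pathlib import Path

MAX_BYTES = 1 * 1024 * 1024                   # images must be smaller than 1 MB
ALLOWED_SUFFIXES = {".jpg", ".jpeg", ".png"}  # currently supported formats

def build_vision_message(image_path: str, question: str) -> dict:
    path = Path(image_path)
    if path.suffix.lower() not in ALLOWED_SUFFIXES:
        raise ValueError(f"Unsupported format: {path.suffix}")
    if path.stat().st_size >= MAX_BYTES:
        raise ValueError("Image must be smaller than 1 MB")

    encoded = base64.b64encode(path.read_bytes()).decode("utf-8")
    mime = "image/png" if path.suffix.lower() == ".png" else "image/jpeg"
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url", "image_url": {"url": f"data:{mime};base64,{encoded}"}},
        ],
    }

message = build_vision_message("chart.png", "What does this chart show?")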
Vision Capable Models:
Thank you for using the platform! If you have any questions or suggestions or have found any bugs, please let me know!
v0.4.2 ---- 04.03.2024
This is a minor release adding the new Anthropic models.
Added LLMs:
- Anthropic
claude-3-opus
claude-3-sonnet
Thank you for using the platform! If you have any questions or suggestions or have found any bugs, please let me know!
v0.4.1 ---- 03.03.2024
This is a minor release adding new models and fixing some bugs.
Added LLMs:
- Mistral
- mistral-large: Mistral's new GPT-4 competitor. The model is really powerful, being comparable in performance and superior in speed. If you are using GPT-4-turbo, you should definitely give it a try.
- mistral-medium: New closed source Mistral model, comparable to mixtral-8x7b and GPT-3.5-turbo.
- mistral-small: New closed source Mistral model, comparable to mistral-7b.
- Renamed the old mistral-small to mixtral-8x7b (OS): This is a third-party hosted version of the open source model, comparable to GPT-3.5-turbo.
- Renamed the old mistral-tiny to mistral-7b (OS): This is a third-party hosted version of the open source model.
- Aleph Alpha:
- Updated the Aleph Alpha models to their latest versions.
- The new models are supposed to be more powerful, specifically in how well they adapt to the chat interface.
- From my first testing, they are still noticeably behind other providers.
Outlook:
- Support for the first multi-modal models
- I am specifically working on integrating GPT-4-Vision in our hosted Azure environment. This will allow you to upload images to the chat interface.
- Image Generation
- Image reprompting and inpainting, as well as upscaling images, on SKAI
- Video Generation Capabilities (coming soon)
Thank you for using the platform! If you have any questions or suggestions or have found any bugs, please let me know!
v0.4 ---- 22.02.2024
New features and improvements are here! This version introduces new Anthropic Models and updates to the document comparison demo.
Changes:
- Document Comparison
- Introduced document comparison capabilities. You can now compare two documents and see the differences between them.
- The UI has been reworked to enhance document chunking capabilities and to provide a better user experience.
- Head to /demo/document-comparison to try the new document comparison.
- Updates to Chat Models
- Added new Anthropic models:
claude-2.1
claude-instant-1.2
- Removed legacy OpenAI models (GPT-3.5-turbo-legacy and GPT-4-32k-legacy).
- Fixed Token Streaming
- Fixed an issue where token streaming was not working properly for most providers.
- Lots of UI improvements and bug fixes.
Outlook:
- Image Generation
- Image reprompting and inpainting, as well as upscaling images, on SKAI
- Video Generation Capabilities (coming soon)
Thank you for using the platform! If you have any questions or suggestions or have found any bugs, please let me know!
v0.3.1 ---- 02.01.2024
This is a minor update adding more image generation models, as well as some bug fixes and UI changes.
New Features:
- Image Generation
- Added Stability AI image generation models (Stable Diffusion XL). Try them out by selecting them from the settings tab here: /demo/image-generation
- You can now post images on SKAI. You can also like and comment on posts on SKAI.
Outlook:
- Image Generation
- Image reprompting and inpainting, as well as upscaling images, on SKAI
- PIA Update
- Email notifications for PIA
- More platforms to screen for
Thank you for exploring the new features! For any inquiries or feedback, please reach out!
v0.3 ---- 22.12.2023
This version adds new model providers and introduces the capability to generate images using DALL-E 3. Check out the new additions and be aware of the compliance specifics regarding the new providers!
New Features:
Thank you for exploring the new features! For any inquiries or feedback, please reach out!
v0.2.1 ---- 24.11.2023
This version is a minor release. It introduces new OpenAI models on Azure, as well as some minor UI improvements and bug fixes.
Changes:
- The ChatGPT model has been replaced by the new GPT-3.5-turbo model. Differences from the old model are:
- Default context window of 16,385 tokens, replacing the old 16k model.
- Note: The model returns a maximum of 4,096 output tokens.
- The base GPT-4 model has been replaced by the new GPT-4-turbo model. Differences from the old model are:
- Default context window of 128,000 tokens. Please use GPT-4-turbo instead of GPT-4-32k from now on.
- The model's internalized knowledge has been updated to April 2023.
- Note: The model returns a maximum of 4,096 output tokens.
- Note: GPT-4-turbo is much cheaper than the old model, so please use it from now on.
If you are interested in more specific information about the models, please refer to the OpenAI API release post.
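As an assumed illustration of what these numbers mean in practice, the sketch below uses tiktoken's cl100k_base encoding to check that a prompt leaves room for the 4,096 output tokens within GPT-4-turbo's 128,000-token window; real counts differ slightly because chat formatting adds a small per-message overhead.

```python
# Rough sketch: checking that a prompt fits GPT-4-turbo's context window while
# leaving room for the maximum of 4,096 output tokens. Counts are approximate.
import tiktoken

CONTEXT_WINDOW = 128_000
MAX_OUTPUT_TOKENS = 4_096

encoding = tiktoken.get_encoding("cl100k_base")  # encoding used by the GPT-3.5/4 family

def fits_in_context(prompt: str) -> bool:
    prompt_tokens = len(encoding.encode(prompt))
    return prompt_tokens + MAX_OUTPUT_TOKENS <= CONTEXT_WINDOW

print(fits_in_context("Summarize the attached contract."))  # True for short prompts
```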
Thank you for using the platform! If you have any questions or suggestions or have found any bugs, please let me know!
v0.2 ---- 30.10.2023
This version is one of the biggest releases so far. It comes with a lot of new features and improvements.
Important Notes
- LLMs@SKAD is now officially called SKAI.
- This version comes with heavy back-end changes. Unfortunately, all previous user data (such as namespaces, personas, etc.) has been deleted. If you have crucial data that you NEED to keep, please contact me at [email protected].
- There will shortly be a new URL: skai-app.com. The current URL will, however, still be available for the foreseeable future.
Changes
- New Pages
- Personas -> Create and manage your own personas
- Account -> Manage your account settings
- More settings will be added soon. Right now, you can only change your username and issue API keys.
- API docs are coming soon.
- Upload to Knowledge Database is now called Namespaces!
- New Features
- You can now see and edit which files are in your namespace.
- You can now give users write permission to your namespaces. This allows them to upload and delete files from your namespace.
- Namespaces allow for more file types now. (PDF, TXT, MD, DOCX, and JSON!)
- Namespaces now store metadata about the files they contain. This improves retrieval quality and lets you see where information came from when a chatbot is citing documents.
- You can now determine how files are being split when uploading them to a namespace. Simply open the settings next to the upload field and select the desired option. (A rough illustration of this kind of splitting appears after this changes list.)
- Namespaces can now enhance retrieval by generating summaries and artificial metadata for files. Choose the Enable Enhanced Processing option in the upload settings to enable this feature.
- You can now start and stop open-source models directly from the Chat UI. Simply select an open-source model and click on the boot button. This will try to issue a new model instance. Models will automatically shut down 15 minutes after they have last been used. Note that this requires compute resources which are not always available. If you keep getting errors trying to boot a model, there might not be any capacity at the moment!
- You can now use speech-to-text in the chat UI. Simply click on the microphone icon in the input field and start speaking. This feature is still experimental and might not work for all browsers. Make sure to select the language you are speaking in the settings.
- Improvements and Bugfixes
- The chat UI now lets you open and close the settings panel.
- Lots of small UI improvements and bug fixes.
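For anyone wondering what "how files are being split" refers to, the sketch below shows one common strategy: fixed-size chunks with a small overlap so sentences at chunk borders are not lost. The sizes and the character-based splitting are assumptions for illustration; the actual splitting options in the upload settings may work differently.

```python
# One common way documents are split before being stored in a namespace:
# fixed-size character chunks with overlap. Sizes here are assumed defaults.
def split_text(text: str, chunk_size: int = 1000, overlap: int = 200) -> list[str]:
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

# Hypothetical usage with a local file:
with open("contract.txt", encoding="utf-8") as f:
    for i, chunk in enumerate(split_text(f.read())):
        print(i, len(chunk))
```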
Thank you for using the platform! If you have any questions or suggestions or have found any bugs, please let me know!
v0.1 ---- 19.09.2023
This version is intended as the first experimental release. There have been ongoing efforts to productionize the platform, but it is still in a very early stage. I am currently focusing on a lot of backend-related tasks, so you might not see many changes on the site for a while. I am mainly working on making the platform more stable and secure. However, if you have any suggestions or ideas, please let me know!
Changes
- The main URL has changed to https://skad-openai-frontend-production.up.railway.app/! It is possible that the URL might change again soon because I am still in the process of productionizing the platform. But I will keep you updated!
- The OpenAI provider has been replaced by the Azure OpenAI provider. This has no effect on users; however, requests will now be processed in our European Azure environment. This does not mean that data privacy concerns are gone!
Note that message streaming might be laggy using the Azure OpenAI provider. Sadly, I cannot improve this since this seems to be an issue with the Azure servers.
- Namespaces can finally be shared with other users!
- To share a namespace, go to /demo/namespaces, select a namespace, and click on the share button.
- Add as many people as you want to share the namespace with.
- Sharing a namespace gives read-only permissions to users. They cannot edit or delete your namespace.
- Guest accounts are now fully supported. If you want a second account for testing or would like to give a client access to the platform, contact me at [email protected].
- The namespace editing page has received a much-needed UI update.
I am trying to rework most pages on the site to make them more user-friendly. If you have any suggestions, please let me know!
- Automatic webscraping is now supported!
- When you include a URL in your request, the platform will automatically scrape the website and include the text in your request. This is great if you have some web content that you want to analyze.
- The resulting text content can be quite large depending on the page. Consider using large context models if your request fails!
- You can disable this feature by checking the Disable Webscraping option in the settings. This can be helpful if you accidentally include a URL in your request (e.g. in some text you copied).
- Note that this feature is still experimental and might not work for all websites. (A rough sketch of the underlying idea appears after this changes list.)
- New persona Proofreading (Grammatik- und Rechtschreibprüfung) lets you quickly check text for spelling and grammar mistakes.
- Major features are now annotated with tooltips. Hover over the info icon to see a description of the feature(s).
- Failed namespace retrievals now show up on messages as a yellow info tooltip and suggest further action.
- Open Source models are now displayed as offline when they are sleeping.
- Lots of bug fixes, performance improvements, and minor UI changes.
...oh, and also Patchnotes!
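For the automatic webscraping mentioned above, the general idea is roughly the one sketched below: detect a URL in the request, fetch the page, reduce it to plain text, and prepend it to the prompt. The libraries, the character limit, and the single-URL handling are assumptions for illustration, not the platform's actual implementation.

```python
# Rough illustration of the webscraping idea: URL detection, fetching, text
# extraction, and prepending the result to the request. Details are assumed.
import re
import requests
from bs4 import BeautifulSoup

URL_PATTERN = re.compile(r"https?://\S+")

def expand_urls(user_request: str, max_chars: int = 20_000) -> str:
    match = URL_PATTERN.search(user_request)
    if match is None:
        return user_request
    response = requests.get(match.group(0), timeout=10)
    response.raise_for_status()
    page_text = BeautifulSoup(response.text, "html.parser").get_text(separator="\n", strip=True)
    # Large pages can exceed smaller context windows, hence the truncation.
    return f"Website content:\n{page_text[:max_chars]}\n\nRequest:\n{user_request}"

print(expand_urls("Summarize https://example.com please"))
```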
Outlook
There are a few things planned for future releases:
- Prefetch checks for requests
- In the future, your requests will be checked for sensitive data by an independent model before they are sent to the Provider. This will help prevent data leaks. I have already trained a custom model for this, so expect this feature to be released soon!
- Code interpreter-like capabilities for any model
- I have been working on this for quite a while now. It is a very complex feature, and I am not sure when it will be ready for release since it raises a lot of security concerns. But I am working on it!
- Requestable Open Source models
- The next release will include a feature that allows you to request a deployment of any Open Source model. Costs have been a major issue for Open Source models, so hopefully, this feature will help with that.
- Landing Page UI update
- Account System
Thank you for using the platform! If you have any questions or suggestions, please let me know!