VicUnLocked-30b-LoRA

Developer: Neko Institute of Science

Overview

VicUnLocked-30b-LoRA is a highly experimental language model from the Neko Institute of Science, derived from LMSYS's Vicuna. It is a LoRA (Low-Rank Adaptation) adapter fine-tuned on user-shared conversations from ShareGPT.

Base Model

The base model for VicUnLocked-30b-LoRA is LLaMA, an auto-regressive language model based on the transformer architecture.
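The relationship between the base model and the adapter can be made concrete with a short loading sketch using the Hugging Face transformers and peft libraries. This is illustrative only: the repository IDs below are assumptions (a community mirror of the LLaMA-30B weights and this adapter's Hub repo), not confirmed paths.

```python
# A minimal sketch of applying a LoRA adapter on top of a LLaMA base model.
# Repo IDs are assumptions for illustration, not confirmed paths.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base_model_id = "huggyllama/llama-30b"  # assumed community mirror of LLaMA-30B
adapter_id = "Neko-Institute-of-Science/VicUnLocked-30b-LoRA"  # assumed adapter repo

tokenizer = LlamaTokenizer.from_pretrained(base_model_id)
base = LlamaForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,  # 30B weights are large; fp16 halves memory
    device_map="auto",          # shard across available GPUs/CPU
)

# PeftModel wraps the base model and injects the low-rank adapter weights;
# the frozen base weights themselves are left untouched.
model = PeftModel.from_pretrained(base, adapter_id)
```

Because LoRA stores only the small adapter matrices, the download for the adapter is a tiny fraction of the full 30B checkpoint; the base model must be obtained separately.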

Unique Features

As an experimental model, VicUnLocked-30b-LoRA may yield insights and outputs that more constrained models do not. However, that same experimental nature can also lead to unpredictable results.

Training Method

The model was fine-tuned using supervised instruction fine-tuning. Training was run for only one epoch due to the model's large size and computational constraints.
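For readers unfamiliar with LoRA fine-tuning, a minimal sketch of such a run is shown below. All hyperparameters (rank, alpha, target modules, batch size, learning rate) are illustrative guesses, not the values used to train this model; only the single epoch matches this card.

```python
# A hedged sketch of one-epoch supervised LoRA fine-tuning with peft.
# Hyperparameters are illustrative assumptions, not the model's actual settings.
import torch
from transformers import LlamaForCausalLM, TrainingArguments
from peft import LoraConfig, get_peft_model

base = LlamaForCausalLM.from_pretrained(
    "huggyllama/llama-30b",  # assumed base checkpoint
    torch_dtype=torch.float16,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,                                 # low-rank dimension (assumed)
    lora_alpha=32,                        # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt (assumed)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# Only the small adapter matrices are trainable; the 30B base stays frozen,
# which is what makes fine-tuning a model of this size tractable.
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()

training_args = TrainingArguments(
    output_dir="vicunlocked-30b-lora",
    num_train_epochs=1,                 # a single epoch, per this card
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=3e-4,
    fp16=True,
)
# ...then train with transformers.Trainer or a custom loop over the formatted data.
```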

Training Data

The model was trained on the Aeala/ShareGPT_Vicuna_unfiltered dataset, which includes conversations collected from ShareGPT.com.
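A minimal sketch of loading this dataset and flattening one conversation into Vicuna's USER/ASSISTANT turn format is shown below. The field names ("conversations", "from", "value") follow the common ShareGPT JSON layout and are assumptions about this particular dataset; depending on the repository layout, load_dataset may need to be pointed at a specific JSON file within it.

```python
# Load the ShareGPT training data and flatten a conversation into
# USER/ASSISTANT turns. Field names follow the common ShareGPT layout
# and are assumptions about this dataset's schema.
from datasets import load_dataset

ds = load_dataset("Aeala/ShareGPT_Vicuna_unfiltered")

def to_vicuna_prompt(example):
    role_map = {"human": "USER", "gpt": "ASSISTANT"}
    turns = [
        f"{role_map.get(t['from'], t['from'].upper())}: {t['value']}"
        for t in example["conversations"]
    ]
    return {"text": "\n".join(turns)}

train = ds["train"].map(to_vicuna_prompt)
print(train[0]["text"][:500])  # inspect the first formatted conversation
```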

Data Freshness

This model does not incorporate new data after training; its knowledge is limited to the information available during its training period.

License

VicUnLocked-30b-LoRA is released under a non-commercial license. It is intended primarily for research and is not intended for commercial use.

Safety and Alignment

Given the experimental nature of this model, users are advised to proceed with caution and responsibility, as it might produce unpredictable results.

Uses and Applications

VicUnLocked-30b-LoRA is primarily used for research on large language models and chatbots. However, its unlocked nature calls for careful and responsible use.
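A hedged example of prompting the model for research use follows, continuing from the loading sketch above (it reuses that `model` and `tokenizer`). The prompt wording is the commonly published Vicuna v1.1 conversation template and may differ slightly from what this adapter actually expects.

```python
# Prompt the model using the commonly published Vicuna v1.1 template
# (assumed; this adapter's exact expected format is not confirmed here).
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: Summarize what a LoRA adapter is in two sentences. ASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=128,
    temperature=0.7,
    do_sample=True,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```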

Performance and Benchmarks

VicUnLocked-30b-LoRA has been evaluated using standard benchmarks, human preference tests, and LLM-as-a-judge. More detailed metrics, including comparisons with Vicuna-33B and other similar models, can be found on the benchmarks leaderboard.

Note: As an experimental and unlocked model, it should be used with caution and responsibility.