WizardLM: The Most Impressive 7B LLaMA Model in the Market

Published On Mon May 08 2023

WizardLM is a powerful open language model with 7 billion parameters and exceptional conversational capabilities. It is a LLaMA model that was fine-tuned with a unique data-generation method. As of today, WizardLM is regarded as one of the most impressive 7B LLaMA models available.

Training

The WizardLM model was released in April 2023 and was fine-tuned on a large set of instruction-following conversations of varying difficulty. Rather than relying on human-written examples, its training data was generated automatically by a large language model.

The training process for WizardLM used 70k computer-generated instructions produced with a new method called Evol-Instruct. This method creates instructions of varying difficulty by repeatedly rewriting an initial instruction with five expansion operations that make it more complex. The expanded instructions and their responses are generated with ChatGPT.
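To make the idea concrete, here is a minimal sketch of how an instruction could be "evolved" with ChatGPT through the OpenAI API. The prompt wording and the single added constraint are illustrative assumptions, not the actual Evol-Instruct prompts:

   # Ask ChatGPT to rewrite a seed instruction into a harder one
   # (illustrative prompt, not the paper's exact wording)
   curl https://api.openai.com/v1/chat/completions \
     -H "Content-Type: application/json" \
     -H "Authorization: Bearer $OPENAI_API_KEY" \
     -d '{
       "model": "gpt-3.5-turbo",
       "messages": [{
         "role": "user",
         "content": "Rewrite the following instruction so it is more complex, e.g. by adding one extra constraint: Explain photosynthesis."
       }]
     }'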

To evaluate performance, WizardLM was compared with Alpaca 7B, Vicuna 7B, and ChatGPT. Ten individuals blindly judged the models' responses in five areas: Relevance, Knowledge, Reasoning, Calculation, and Accuracy.

Installation on Mac

There are two ways to run WizardLM on a Mac: with llama.cpp or with text-generation-webui. Below are instructions for both methods.

llama.cpp

To install and run WizardLM on Mac using llama.cpp, follow these steps:

  • Step 1: Open the Terminal app and navigate to the llama.cpp directory. Create the model directory.
  • Step 2: Download the model weights (a command sketch for steps 1 and 2 follows this list).
  • Step 3: Run the WizardLM model.
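Here is a minimal sketch of steps 1 and 2, assuming you are inside the llama.cpp directory. The models/wizardLM-7B folder name and the q4_0 quantization are assumptions; any GGML quantization from the TheBloke/wizardLM-7B-GGML repository works the same way:

   # Step 1: create a folder for the model weights (folder name is an assumption)
   mkdir -p models/wizardLM-7B
   # Step 2: download the 4-bit quantized GGML weights from Hugging Face
   curl -L -o models/wizardLM-7B/wizardLM-7B.ggml.q4_0.bin \
     https://huggingface.co/TheBloke/wizardLM-7B-GGML/resolve/main/wizardLM-7B.ggml.q4_0.bin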

The model follows a conversational dialog format in which each user turn begins with Human:, roughly like this (the Assistant: label below is illustrative):
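   Human: What is the capital of France?
   Assistant: The capital of France is Paris.
   Human: ...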

So a reverse prompt of Human: works beautifully: in interactive mode, llama.cpp hands control back to you whenever the model emits that string.

A typical command for running WizardLM interactively with llama.cpp looks like this (adjust the model path to wherever you saved the weights):

   ./main -m ./models/wizardLM-7B/wizardLM-7B.ggml.q4_0.bin --color -i -r "Human:" -n 256
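Here, -i starts interactive mode, -r "Human:" sets the reverse prompt so generation pauses and returns control to you when the model prints Human:, -n 256 caps the number of tokens generated per turn, and --color highlights your input. The -n value is just a reasonable default, not something the model requires.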

text-generation-webui

To install and run WizardLM on Mac using text-generation-webui, follow these steps:

  • Step 1: Open the Terminal app and navigate to the text-generation-webui directory. Create the model directory.
  • Step 2: Download the model weights.
  • Step 3: Start text-generation-webui (a command sketch for steps 1-3 follows this list).
  • Step 4: Navigate to the Model page and select the TheBloke_wizardLM-7B-GGML model in the Model dropdown menu.
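Here is a minimal sketch of steps 1-3, assuming text-generation-webui is already installed and its Python environment is active; the folder name under models/ is an assumption chosen to match the dropdown entry above:

   # Step 1: create a folder for the model inside text-generation-webui/models
   mkdir -p models/TheBloke_wizardLM-7B-GGML
   # Step 2: download the 4-bit quantized GGML weights from Hugging Face
   curl -L -o models/TheBloke_wizardLM-7B-GGML/wizardLM-7B.ggml.q4_0.bin \
     https://huggingface.co/TheBloke/wizardLM-7B-GGML/resolve/main/wizardLM-7B.ggml.q4_0.bin
   # Step 3: start the web UI
   python server.py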

You should see a confirmation message on the bottom right. Now, WizardLM is ready to converse with you on the text-generation page.

Installation on Windows PC

If you're running WizardLM on a Windows PC, follow these steps:

  • Step 1: Install text-generation-webui.
  • Step 2: Navigate to the Model page and enter TheBloke/wizardLM-7B-GPTQ in the Download custom model or LoRA text box.
  • Step 3: Click the Download button and wait for the download to complete.
  • Step 4: Click the refresh icon next to the Model dropdown menu.
  • Step 5: In the Model dropdown menu, select TheBloke_wizardLM-7B-GPTQ and ignore the error message.
  • Step 6: Fill in the following values in the GPTQ parameters section:
      • wbits: 4
      • groupsize: 128
      • model_type: llama

  • Step 7: Click Save settings for this model so that you don't need to enter these values again. The automatic parameter loading only takes effect after you restart the GUI.

Now, you can converse with WizardLM on the text-generation page. If the model outputs gibberish, the GUI may be loading the wrong weight file. Delete the file wizardLM-7B-GPTQ-4bit.latest.act-order.safetensors in the models\TheBloke_wizardLM-7B-GPTQ folder so that the correct file is loaded.
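For example, from a Command Prompt opened in your text-generation-webui folder (assuming the default folder layout), the deletion looks like this:

   del "models\TheBloke_wizardLM-7B-GPTQ\wizardLM-7B-GPTQ-4bit.latest.act-order.safetensors"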

With all these steps, you should now be able to run WizardLM on your machine, whether it's a Mac or a Windows PC, and be amazed by its magical performance.