---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen3-4B-Thinking-2507
tags:
  - axolotl
  - base_model:adapter:Qwen/Qwen3-4B-Thinking-2507
  - lora
  - transformers
datasets:
  - ICEPVP8977/Uncensored_Small_Reasoning
pipeline_tag: text-generation
model-index:
  - name: outputs/qwen-4b-thinking-lora-uncensored
    results: []
---

[Built with Axolotl](https://github.com/axolotl-ai-cloud/axolotl)

<details><summary>See axolotl config</summary>

axolotl version: `0.13.0.dev0`

```yaml

base_model: Qwen/Qwen3-4B-Thinking-2507
datasets:
  - path: ICEPVP8977/Uncensored_Small_Reasoning
    type: alpaca
output_dir: ./outputs/qwen-4b-thinking-lora-uncensored

sequence_len: 4096
adapter: lora

lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
  - q_proj
  - v_proj
  - k_proj
  - o_proj
  - gate_proj
  - down_proj
  - up_proj

gradient_accumulation_steps: 1
micro_batch_size: 1
num_epochs: 1
optimizer: adamw_bnb_8bit
learning_rate: 0.0002
load_in_4bit: true
train_on_inputs: false
bf16: auto
```

</details>
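To reproduce the training run, save the YAML above to a file and launch it with the Axolotl CLI (recent releases expose an `axolotl train <config>` entry point; older versions use `accelerate launch -m axolotl.cli.train <config>`).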

# outputs/qwen-4b-thinking-lora-uncensored

This is a LoRA fine-tune of [Qwen/Qwen3-4B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507) trained on the [ICEPVP8977/Uncensored_Small_Reasoning](https://huggingface.co/datasets/ICEPVP8977/Uncensored_Small_Reasoning) dataset.
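To use the adapter, load the base model with 🤗 Transformers and attach the LoRA weights with PEFT. A minimal sketch; the adapter repo id below is a placeholder, so substitute the actual path of this repository:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen3-4B-Thinking-2507"
adapter_id = "vpakarinen/qwen-4b-thinking-lora-uncensored"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,  # training config uses bf16: auto
    device_map="auto",
)
# Attach the LoRA adapter on top of the base model.
model = PeftModel.from_pretrained(model, adapter_id)

# To mirror load_in_4bit: true from the training config, pass
# quantization_config=BitsAndBytesConfig(load_in_4bit=True) to
# from_pretrained instead (requires the bitsandbytes package).
```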

This LoRA adapter is intended to fully uncensor the Qwen3 4B Thinking model. Use the Alpaca instruction template when prompting, as shown in the sketch below.
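Since the config trains with `type: alpaca`, inference prompts should follow the same format. A sketch using the standard Alpaca template (no-input variant), reusing the `model` and `tokenizer` loaded above:

```python
# Standard Alpaca prompt format (no-input variant).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(instruction="Explain what a LoRA adapter is.")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=512)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[-1]:],
    skip_special_tokens=True,
))
```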