---
language:
- tr
- en
- de
- es
- fr
- ru
- zh
- ja
- ko
license: mit
tags:
- turkish
- türkiye
- reasoning
- ai
- lamapi
- gemma3
- next
- next-x1
- text-generation
- open-source
- 14b
- large-language-model
- llm
- transformer
- artificial-intelligence
- machine-learning
- nlp
- multilingual
- instruction-tuned
- chat
- generative-ai
- optimized
- trl
- sft
- cognitive
- analytical
- enterprise
pipeline_tag: text-generation
datasets:
- CognitiveKernel/CognitiveKernel-Pro-SFT
- OpenSPG/KAG-Thinker-training-dataset
- QuixiAI/dolphin-r1
- uclanlp/Brief-Pro
- Gryphe/Opus-WritingPrompts
- GreenerPastures/All-Your-Base-Full
- dongguanting/ARPO-SFT-54K
- Medint/Multi-Med-conversational
- mlabonne/smoltalk-flat
- mlabonne/natural_reasoning-formatted
- QuixiAI/open-instruct-uncensored
- mlabonne/open-perfectblend
library_name: transformers
---

<img src='assets/banner.png'>

# 🧠 Next 8B (m427)

### *Türkiye’s Compact Reasoning AI — Logical, Analytical, and Efficient*

[![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT)
[![Language: Multilingual](https://img.shields.io/badge/Language-Multilingual-red.svg)]()
[![HuggingFace](https://img.shields.io/badge/🤗-Lamapi/Next--8B-orange.svg)](https://huggingface.co/Lamapi/next-8b)

---

## 📖 Overview

**Next 8B** is an **8-billion-parameter large language model (LLM)** built on the **Qwen 3 architecture** and optimized for **reasoning and analytical performance**.
It is **Türkiye's reasoning-capable compact AI**, designed to think, infer, and solve problems efficiently.

Focused purely on **cognitive tasks**, it excels in problem-solving, abstract logic, and multilingual understanding (Turkish, English, and more).

---

## ⚡ Highlights

* 🇹🇷 **Türkiye’s compact reasoning AI**
* 🧠 **Logical, analytical, and inferential reasoning**
* 🌍 **Multilingual support (Turkish, English, and 30+ languages)**
* ⚡ **Lightweight and efficient**
* 💬 **Instruction-tuned for dialogue, tutoring, and analysis**

---

## 📊 Benchmark Performance

<table>
  <thead>
    <tr>
      <th>Model</th>
      <th>MMLU (5-shot) %</th>
      <th>MMLU-Pro %</th>
      <th>GSM8K %</th>
      <th>MATH %</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Next 14B (Thinking)</td>
      <td><strong>94.6</strong></td>
      <td><strong>93.2</strong></td>
      <td><strong>98.8</strong></td>
      <td>92.7</td>
    </tr>
    <tr>
      <td>Next 12B</td>
      <td>92.7</td>
      <td>84.4</td>
      <td>95.3</td>
      <td>87.2</td>
    </tr>
    <tr class="next">
      <td><strong>Next 8B (Thinking)</strong></td>
      <td>91.0</td>
      <td>88.5</td>
      <td>96.2</td>
      <td>88.0</td>
    </tr>
    <tr>
      <td>GPT-5</td>
      <td>92.5</td>
      <td>87.0</td>
      <td>98.4</td>
      <td><strong>96.0</strong></td>
    </tr>
    <tr>
      <td>Claude Opus 4.1 (Thinking)</td>
      <td>~92.0</td>
      <td>87.8</td>
      <td>84.7</td>
      <td>95.4</td>
    </tr>
  </tbody>
</table>
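
The scores above are reported as published and have not been re-verified here. If you want to reproduce the MMLU 5-shot setting yourself, one common route is EleutherAI's `lm-evaluation-harness`. The snippet below is a sketch assuming a recent 0.4.x release is installed as `lm_eval`; the exact entry point and task names can vary between versions.

```python
# Hypothetical reproduction sketch using lm-evaluation-harness (pip install lm-eval).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Lamapi/next-8b,dtype=float16",
    tasks=["mmlu"],
    num_fewshot=5,
    batch_size=8,
)
# Print the per-task and aggregate scores returned by the harness.
print(results["results"])
```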



---

## 🚀 Installation & Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Requires transformers, torch, and accelerate (for device_map="auto").
model_id = "Lamapi/next-8b"

# Load the tokenizer and the model in half precision, placing layers automatically.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [
    {"role": "system", "content": "You are Next-X1, a reasoning-capable AI assistant created by Lamapi. You think logically, reason efficiently, and answer concisely."},
    {"role": "user", "content": "Explain why the sky appears blue using logical reasoning."}
]

# Apply the model's chat template, tokenize, and generate a response.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
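
For GPUs with limited memory, the same checkpoint can typically be loaded in 4-bit with `bitsandbytes`. The snippet below is a minimal sketch rather than an official recipe; it assumes `bitsandbytes` and `accelerate` are installed and that the model loads cleanly through transformers' standard quantization path.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch

model_id = "Lamapi/next-8b"

# NF4 4-bit weights with float16 compute keep memory use low at a small quality cost.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Same chat-template flow as above, just on the quantized model.
messages = [{"role": "user", "content": "Summarize deductive reasoning in two sentences."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=120)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```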

---

## 🧩 Key Features

| Feature                                | Description                                                                  |
| -------------------------------------- | ---------------------------------------------------------------------------- |
| 🧠 **Efficient Reasoning**             | Strong in abstract logic, critical thinking, and structured problem-solving. |
| 🇹🇷 **Multilingual Intelligence**     | Deep Turkish understanding with 30+ language support.                        |
| ⚡ **Lightweight & Optimized**          | Quantized formats (Q8_0, Q4_K_M, FP16) for efficiency; see the sketch after this table. |
| 🧮 **Mathematical & Analytical Skill** | Handles structured reasoning and moderate-complexity problems.                |
| 🧩 **Non-Vision Architecture**         | Focused on text-based cognitive tasks.                                       |
| 🏢 **Reliable & Consistent**           | Predictable outputs suitable for professional use.                           |
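
As a rough illustration of the quantized formats noted in the table, a GGUF export can be run with `llama-cpp-python`. This is a sketch under assumptions: the filename below is hypothetical, and `model_path` should point at whichever quantized file (Q8_0, Q4_K_M, etc.) you actually have locally.

```python
from llama_cpp import Llama

# Hypothetical local GGUF file; substitute the quantization level you downloaded.
llm = Llama(model_path="./next-8b.Q4_K_M.gguf", n_ctx=4096)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise, logical assistant."},
        {"role": "user", "content": "Give two differences between deduction and induction."},
    ],
    max_tokens=200,
)
print(response["choices"][0]["message"]["content"])
```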

---

## 📐 Model Specifications

| Specification     | Details                                                       |
| ----------------- | ------------------------------------------------------------- |
| **Base Model**    | Qwen 3                                                        |
| **Parameters**    | 8 Billion                                                     |
| **Architecture**  | Transformer (Causal LLM)                                      |
| **Modalities**    | Text-only                                                     |
| **Fine-Tuning**   | Instruction-tuned with reasoning datasets                     |
| **Optimizations** | Quantization-ready, FP16 support                              |
| **Primary Focus** | Reasoning, logic, decision-making, and language understanding |

---

## 🎯 Ideal Use Cases

* **Compact Analytical Chatbots**
* **Research Assistance** (scientific/legal)
* **Education & Tutoring**
* **Code & Algorithm Design**
* **Decision Support Systems**

---

## 💡 Performance Highlights

* **Efficient Reasoning:** Compact yet powerful logical reasoning.
* **Good Mathematical Understanding:** Handles structured problems reliably.
* **Lightweight & Fast:** Ideal for resource-conscious environments.
* **Consistent Outputs:** Professional-grade reliability in a smaller footprint.

---

## 📄 License

Licensed under the **MIT License** — free for commercial and non-commercial use.

---

## 📞 Contact & Support

* 📧 **Email:** [[email protected]](mailto:[email protected])
* 🤗 **HuggingFace:** [Lamapi](https://huggingface.co/Lamapi)

---

> **Next 8B** — compact *reasoning-capable* AI, blending **logical depth**, **analytical efficiency**, and **lightweight reliability**.

[![Follow on HuggingFace](https://img.shields.io/badge/Follow-HuggingFace-yellow?logo=huggingface)](https://huggingface.co/Lamapi)