Update README.md
README.md (CHANGED)
@@ -34,8 +34,7 @@ license: mit
 This model focuses on fine-tuning the Llama-2 7B large language model for Python code generation. The project leverages Ludwig, an open-source toolkit, and a dataset of 500k Python code samples from Hugging Face. The model applies techniques such as prompt templating, zero-shot inference, and few-shot learning, enhancing the model's performance in generating Python code snippets efficiently.
 
 - **Developed by:** Kevin Geejo, Aniket Yadav, Rishab Pandey
-
-- **Shared by [optional]:** No additional sharing information provided
+
 - **Model type:** Fine-tuned Llama-2 7B for Python code generation
 - **Language(s) (NLP):** Python (for code generation tasks)
 - **License:** Not explicitly mentioned, but Llama-2 models are typically governed by Meta AI’s open-source licensing
@@ -45,9 +44,8 @@ This model focuses on fine-tuning the Llama-2 7B large language model for Python
 
 <!-- Provide the basic links for the model. -->
 
-- **Repository:** Hugging Face
-
-- **Demo [optional]:** No demo link provided
+- **Repository:** Hugging Face
+
 
 ## Uses
 
@@ -157,13 +155,9 @@ The fine-tuned model showed enhanced proficiency in generating Python code snipp
 
 Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute).
 
-- **Hardware Type:** Not specified
-- **Hours used:** Not specified
-- **Cloud Provider:** Not specified
-- **Compute Region:** Not specified
-- **Carbon Emitted:** Not specified
 
-
+
+
 
 ### Model Architecture and Objective
 
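The summary in the first hunk mentions fine-tuning Llama-2 7B with Ludwig, prompt templating, and few-shot learning, but the card does not include the actual training configuration. The sketch below is only a rough illustration of what such a Ludwig run could look like; the base checkpoint name, dataset file, column names, prompt wording, and the LoRA/quantization/trainer settings are all assumptions, not the authors' setup.

```python
import pandas as pd
import yaml
from ludwig.api import LudwigModel

# Hypothetical configuration: none of these values come from the model card.
config = yaml.safe_load(
    """
model_type: llm
base_model: meta-llama/Llama-2-7b-hf   # assumed base checkpoint
prompt:
  template: |
    ### Instruction: write Python code for the task below.
    ### Task: {instruction}
    ### Response:
input_features:
  - name: instruction
    type: text
output_features:
  - name: output
    type: text
adapter:
  type: lora          # parameter-efficient fine-tuning (assumed)
quantization:
  bits: 4             # 4-bit loading so the 7B model fits on one GPU (assumed)
trainer:
  type: finetune
  epochs: 1
  batch_size: 1
  learning_rate: 0.0001
"""
)

model = LudwigModel(config=config)

# Placeholder dataset file with instruction/output columns; the card only says
# "500k Python code samples from Hugging Face".
model.train(dataset="python_code_instructions.csv")

# Inference on a new instruction with the fine-tuned adapter.
predictions, _ = model.predict(
    dataset=pd.DataFrame({"instruction": ["Reverse a string without using slicing."]})
)
print(predictions)
```

LoRA with 4-bit loading is assumed here only because it is a common way to fine-tune a 7B model on a single GPU; for few-shot use, worked instruction/response examples can simply be prepended to the instruction text at inference time.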
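For the carbon-emissions context line in the last hunk: the linked Machine Learning Impact calculator estimates emissions from hardware power draw, runtime, and the carbon intensity of the hosting region. None of these values are reported for this model, so the snippet below only illustrates the arithmetic with placeholder numbers.

```python
# Back-of-the-envelope carbon estimate in the spirit of the ML Impact calculator.
# All inputs are placeholders; the card does not report hardware, hours, or region.
gpu_power_kw = 0.3       # e.g. a single ~300 W data-center GPU (assumed)
training_hours = 24      # assumed
carbon_intensity = 0.4   # kg CO2e per kWh for the hosting region (assumed)

energy_kwh = gpu_power_kw * training_hours
emissions_kg = energy_kwh * carbon_intensity
print(f"~{emissions_kg:.1f} kg CO2e")   # ~2.9 kg CO2e with these placeholder inputs
```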