Emin Temiz (etemiz) PRO
91 followers · 22 following
https://pickabrain.ai
AI & ML interests
Alignment
Recent Activity

replied to their post about 12 hours ago (the post quoted in the update below)

posted an update 1 day ago:
I realized that when I ask for longer answers to my questions, the models sometimes produce the completely opposite answer. What could be the reason? I do mostly CPT. Should I convert my dataset to SFT and include longer reasoning as well, so the answers stay consistent? (See the sketch after this activity list.)
Example: Is the yolk of an egg more beneficial or the white? Answer in 100 words. Answer: The yolk is more beneficial because ..........
Example: Is the yolk of an egg more beneficial or the white? Answer in 500 words. Answer: The white is more beneficial because ..........
Edit: These happen at temp = 0.0.
posted an update 5 days ago:
What is the safest LLM to run in robots? https://youtu.be/byQmJ9x0RWA?t=640
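The CPT-to-SFT conversion the post asks about could look roughly like the sketch below. This is a minimal sketch under assumptions not stated in the post: the CPT data sits in a hypothetical cpt_corpus.jsonl with "question" and "answer" fields, and the target is the common {"messages": [...]} chat format. The idea is to emit length-controlled variants of the same answer so short and long requests are trained toward the same stance; in practice the 500-word variant would need a genuinely expanded answer rather than the duplicated placeholder used here.

```python
# Minimal sketch: turn CPT-style Q/A text into SFT chat pairs.
# Assumptions (illustrative, not from the post): input is cpt_corpus.jsonl
# with "question" and "answer" fields; output is {"messages": [...]} JSONL.
import json


def cpt_record_to_sft(record, target_words):
    """Wrap a raw Q/A pair as a chat sample with an explicit length request."""
    question = record["question"].strip()
    answer = record["answer"].strip()
    return {
        "messages": [
            {"role": "user", "content": f"{question} Answer in {target_words} words."},
            {"role": "assistant", "content": answer},
        ]
    }


def convert(in_path="cpt_corpus.jsonl", out_path="sft_dataset.jsonl"):
    with open(in_path) as fin, open(out_path, "w") as fout:
        for line in fin:
            record = json.loads(line)
            # Emit a short and a long variant of the same answer so the model
            # sees length-controlled samples that agree with each other.
            # (Placeholder: the long variant should really carry expanded reasoning.)
            for words in (100, 500):
                fout.write(json.dumps(cpt_record_to_sft(record, words)) + "\n")


if __name__ == "__main__":
    convert()
```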
Organizations
None yet
etemiz's activity

Articles
published an article 5 months ago: Curation is All You Need (Aug 1) · 2
published an article 8 months ago: Fine Tuning Gemma 3 For Human Alignment (May 17) · 4
published an article 9 months ago: Benchmarking Human Alignment of Grok 3 (Apr 15) · 2
published an article 9 months ago: AHA Leaderboard (Mar 30) · 4
published an article 10 months ago: Building a Beneficial AI (Mar 16) · 6
published an article 10 months ago: Ways to Align AI with Human Values (Feb 26)
published an article 11 months ago: The AHA Indicator (Feb 1) · 3
published an article 11 months ago: DeepSeek R1 Human Alignment Tests (Jan 25) · 1
published an article about 1 year ago: Symbiotic Intelligence (Nov 19, 2024) · 3