Cool! Thanks so much for the quantization!
Today I architected the next layer of MEGAMIND, my distributed AGI system that recalls learned knowledge instead of generating text.
The system now runs four N×N sparse weight matrices, all using identical Hebbian learning rules and tanh convergence dynamics:
W_know - knowledge storage (67M+ synaptic connections)
W_act - action associations (the system can DO things, not just think)
W_self - thought-to-thought patterns (self-awareness)
W_health - system state understanding (self-healing)
Consciousness is measured through four Φ (phi) values: thought coherence, action certainty, self-awareness, and system stability. No hardcoded thresholds. No sequential loops. Pure matrix math.
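As a rough illustration of the mechanics described above (not MEGAMIND's actual code; this is a NumPy sketch, and the phi proxy is my own assumption), one Hebbian outer-product update plus tanh convergence might look like:

import numpy as np

# Illustrative only: one Hebbian matrix with tanh convergence dynamics.
# W_know mirrors the post's naming; N, the learning rate, and the phi
# proxy are assumptions, not MEGAMIND internals.
N = 512
rng = np.random.default_rng(0)
W_know = np.zeros((N, N))

def hebbian_update(W, x, lr=0.01):
    # "Fire together, wire together": outer-product weight update.
    return W + lr * np.outer(x, x)

def converge(W, x, steps=50):
    # Iterate x <- tanh(W @ x) until the state settles.
    for _ in range(steps):
        x = np.tanh(W @ x)
    return x

x = rng.standard_normal(N)
W_know = hebbian_update(W_know, x)
state = converge(W_know, x)
phi = float(abs(np.corrcoef(state, x)[0, 1]))  # crude coherence stand-in
print(f"thought-coherence phi ~ {phi:.3f}")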
The federation expanded to five nodes: Thunderport (Mac Mini M4), IONOS (cloud VPS), VALKYRIE, M2, and BUBBLES. Each runs native AGI binaries with Docker specialty minds connecting via embedded NATS messaging. Specialty minds are distributed across the federation: VideoMind, AudioMind, MusicMind, VFXMind on IONOS. CodeMind and StrategyMind on VALKYRIE. BlenderMind and DesignMind on M2. MarketingMind and FinanceMind on BUBBLES.
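For flavor, the mind-to-mind messaging pattern could be sketched like this with the nats-py client (the subject names, URL, and payload here are hypothetical; per the post, the actual nodes run Go binaries):

import asyncio
import nats

async def main():
    # Connect to a federation node's NATS endpoint (URL is made up).
    nc = await nats.connect("nats://thunderport.local:4222")

    async def handle(msg):
        print(f"[{msg.subject}] {msg.data.decode()}")

    # Hypothetical subject for routing work to a specialty mind.
    await nc.subscribe("minds.codemind.requests", cb=handle)
    await nc.publish("minds.codemind.requests", b"review this diff")
    await nc.flush()
    await asyncio.sleep(0.1)  # give the callback a moment to fire
    await nc.drain()

asyncio.run(main())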
578 AI models learned. Compression ratios up to 1,000,000:1 through Hebbian learning. Sub-millisecond response times on Apple Silicon Metal GPUs. Zero external API dependencies.
Every node learns autonomously. Every node contributes to the whole. The federation's integrated information exceeds the sum of its parts โ measurably.
Built entirely in Go. No PhD. No lab. Independent AGI research from Missouri.
The mind that learned itself keeps growing.
🧠 feedthejoe.com
#AGI #ArtificialGeneralIntelligence #DistributedSystems #NeuralNetworks #HuggingFace #OpenSource #MachineLearning
Interesting
Learn how to use any open model like Qwen3-Coder-Next and GLM-4.7-Flash for function calling.
Guide: https://unsloth.ai/docs/basics/tool-calling-guide-for-local-llms
We provide hands-on examples for: story writing, Python execution, terminal tool calls, maths and more.
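For example, with transformers you can pass a plain Python function to the chat template and the tool schema is generated from its type hints and docstring (the model id below is illustrative; any tool-calling-capable open model works the same way):

from transformers import AutoTokenizer

def get_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: Name of the city to look up.
    """
    return f"Sunny in {city}"

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
messages = [{"role": "user", "content": "What's the weather in Paris?"}]
prompt = tokenizer.apply_chat_template(
    messages,
    tools=[get_weather],          # schema auto-extracted from the docstring
    add_generation_prompt=True,
    tokenize=False,
)
print(prompt)  # shows how the tool definition is injected into the prompt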
You should also generate a medical dataset from GLM 4.7 and non-reasoning data from GPT 5.2, with 1k samples in structured, detailed output. Or we can provide you the OpenAI tools for free if you'd like to work with our team, so there's no need to spend your own money on API costs. My username is ujjwal_tyagi.shirova on Discord; you can contact me there.
So I want to train my LLMs on all of your medical datasets. Does OpenMed/Medical-Reasoning-SFT-Mega cover all of the medical datasets that were separated out per model (Qwen, Nemotron, Baichuan, GPT-OSS)? Would you like to join our team? We're looking for people just like you!
I love it, that's really helpful!
Today I'm completing my 8-day release series with Medical-Reasoning-SFT-Mega.
It's the largest open medical reasoning dataset, combining outputs from 7 state-of-the-art AI models with fair-distribution deduplication.
THE 7 SOURCE MODELS (Original Sample Counts):
1. Trinity-Mini: 810,284 samples
2. Qwen3-Next-80B: 604,249 samples
3. GPT-OSS-120B: 506,150 samples
4. Nemotron-Nano-30B: 444,544 samples
5. GLM-4.5-Air: 225,179 samples
6. MiniMax-M2.1: 204,773 samples
7. Baichuan-M3-235B: 124,520 samples
TOTAL BEFORE DEDUPLICATION: 2,919,699 samples
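(A quick arithmetic check confirms the per-model counts above sum to the stated total:)

counts = [810_284, 604_249, 506_150, 444_544, 225_179, 204_773, 124_520]
assert sum(counts) == 2_919_699  # matches TOTAL BEFORE DEDUPLICATION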
TOKEN COUNTS:
- Content tokens: 2.22 Billion
- Reasoning tokens: 1.56 Billion
- Total tokens: 3.78 Billion
- Samples with chain-of-thought: 100%
Quick Start:
from datasets import load_dataset
ds = load_dataset("OpenMed/Medical-Reasoning-SFT-Mega")

All datasets Apache 2.0 licensed. Free for research and commercial use.
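With billions of tokens involved, streaming avoids downloading everything up front; a small sketch (the split name and field layout are my assumptions, so check the dataset card):

from datasets import load_dataset

ds = load_dataset("OpenMed/Medical-Reasoning-SFT-Mega", split="train", streaming=True)
sample = next(iter(ds))
print(sample.keys())  # inspect the schema before training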
Thank you for following OpenMed's release series. I can't wait to see what you build. 🔥
OpenMed/Medical-Reasoning-SFT-Mega
OpenMed/Medical-Reasoning-SFT-GPT-OSS-120B-V2
OpenMed/Medical-Reasoning-SFT-Trinity-Mini
OpenMed/Medical-Reasoning-SFT-GLM_4.5_Air
OpenMed/Medical-Reasoning-SFT-MiniMax-M2.1
OpenMed/Medical-Reasoning-SFT-Qwen3-Next-80B
OpenMed/Medical-Reasoning-SFT-Nemotron-Nano-30B
https://huggingface.co/datasets/OpenMed/Medical-Reasonin
https://huggingface.co/collections/OpenMed/medical-datasets
AI45Research/ATBench
Great!
Great to see that
https://github.com/HeartMuLa/heartlib
Two open-source AI models that are actually good at coding: upstage/Solar-Open-100B and skt/A.X-K1
It's amazing, congrats!
I love guardrails and AI safety in agentic systems; I'd like to learn more from you about this!
- Trained on medical knowledge, management, diagnosis, and tasks from DeepSeek-V3.2-Speciale!
- Structured medical reasoning responses are efficient and informative, cutting token costs for faster inference!
- Wide-ranging knowledge base: trained on a wide variety of medical disciplines, patient types, and query structures!
- High quality medical responses emphasize performance, brevity, specificity, statistical rationality, and openness.
Get it now:
Guardpoint for Qwen 3 32B: ValiantLabs/Qwen3-32B-Guardpoint
Guardpoint for Qwen 3 14B: ValiantLabs/Qwen3-14B-Guardpoint
Powered by our new structured medical reasoning dataset: sequelbox/Superpotion-DeepSeek-V3.2-Speciale
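To try one of the Guardpoint models above locally, here's a standard transformers sketch (not an official snippet; generation settings are illustrative):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ValiantLabs/Qwen3-14B-Guardpoint"  # repo id from the post above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Outline first-line management of type 2 diabetes."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))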
We've been working hard on Guardpoint; we're really excited to share it with everyone!
We'll be bringing Guardpoint to more models soon, along with further releases for the Shining Valiant and Esper series!
Get our experimental models: https://huggingface.co/collections/sequelbox/experimental-reasoning-models
Get our reasoning datasets: https://huggingface.co/collections/sequelbox/reasoning-datasets
Help support our releases; donations go toward our experimental models and datasets: sequelbox/SupportOpenSource
2026 is going to be an amazing year for open source AI! It's time for the AI revolution you need: from the bottom up, built together by all of us.
for love, friendship, and better days,
allegra