Jieqianyu

Showing results

Hy3 preview (295B A21B), a leading reasoning and agent model in its size class, with strong cost efficiency

Python · 220 stars · 8 forks · Updated Apr 23, 2026
Python · 73 stars · 1 fork · Updated Apr 15, 2026

The agent that grows with you

Python · 114,787 stars · 16,815 forks · Updated Apr 24, 2026

Official repository for the paper "Learning beyond Teacher: Generalized On-Policy Distillation with Reward Extrapolation"

Python · 111 stars · 9 forks · Updated Mar 18, 2026

JoyAI-Image is the unified multimodal foundation model for image understanding, text-to-image generation, and instruction-guided image editing.

Python · 1,951 stars · 145 forks · Updated Apr 15, 2026

The repo is finally unlocked. Enjoy the party! The fastest repo in history to surpass 100K stars ⭐. Join Discord: https://discord.gg/5TUQKqFWd. Built in Rust using oh-my-codex.

Rust · 188,090 stars · 109,289 forks · Updated Apr 24, 2026

Supercharge your AI agents by versioning, tracking, and merging overlapping skills.

Shell · 36 stars · Updated Apr 9, 2026

OpenClaw-RL: Train any agent simply by talking

Python · 5,120 stars · 544 forks · Updated Apr 21, 2026

Causal video-action world model for generalist robot control

Python · 1,070 stars · 78 forks · Updated Apr 24, 2026

Your own personal AI assistant. Any OS. Any platform. The lobster way. 🦞

TypeScript · 363,342 stars · 74,311 forks · Updated Apr 24, 2026

Official implementation of "SEAgent: Self-Evolving Computer Use Agent with Autonomous Learning from Experience"

Python · 239 stars · 25 forks · Updated Aug 7, 2025

Think Before You Move: Latent Motion Reasoning for Text-to-Motion Generation

Python · 16 stars · Updated Jan 4, 2026

💻 A curated list of papers and resources for multi-modal Graphical User Interface (GUI) agents.

1,171 stars · 71 forks · Updated Aug 17, 2025

[ICLR 2026] The official repository for paper "ThinkMorph: Emergent Properties in Multimodal Interleaved Chain-of-Thought Reasoning"

Jupyter Notebook · 178 stars · 6 forks · Updated Jan 26, 2026

Open-source unified multimodal model

Python · 5,863 stars · 520 forks · Updated Oct 27, 2025

Co-Reinforcement Learning for Unified Multimodal Understanding and Generation

Python · 45 stars · 5 forks · Updated Jul 22, 2025

RynnVLA-002: A Unified Vision-Language-Action and World Model

Python · 1,002 stars · 60 forks · Updated Dec 2, 2025

The Agent’s First Day: Benchmarking Learning, Exploration, and Scheduling in Workplace Scenarios

Python · 9 stars · 1 fork · Updated Jan 19, 2026

✨✨ [ICLR 2026] R1-Reward: Training Multimodal Reward Model Through Stable Reinforcement Learning

Python · 288 stars · 22 forks · Updated May 9, 2025

[NeurIPS'24 Spotlight] GAIA: Rethinking Action Quality Assessment for AI-Generated Videos

41 stars · Updated Apr 1, 2025

PointWorld: Scaling 3D World Models for In-The-Wild Robotic Manipulation

418 stars · 9 forks · Updated Mar 11, 2026

Official code implementation of Vary-toy (Small Language Model Meets with Reinforced Vision Vocabulary)

Python · 630 stars · 43 forks · Updated Dec 30, 2024

[ECCV 2024] Official code implementation of Vary: Scaling Up the Vision Vocabulary of Large Vision Language Models.

Python · 1,893 stars · 145 forks · Updated Dec 30, 2024

An official implementation of DanceGRPO: Unleashing GRPO on Visual Generation

Python · 1,587 stars · 78 forks · Updated Oct 16, 2025

A unified inference and post-training framework for accelerated video generation.

Python · 3,418 stars · 321 forks · Updated Apr 24, 2026

SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer

Python · 5,108 stars · 345 forks · Updated Apr 14, 2026

Dream-VL and Dream-VLA: a diffusion VLM and a diffusion VLA.

Python · 113 stars · 4 forks · Updated Jan 14, 2026

[CVPR 2026 Highlight] NeoVerse: Enhancing 4D World Models with In-the-Wild Monocular Videos

Python · 537 stars · 25 forks · Updated Apr 13, 2026

A construction kit for reinforcement learning environment management.

Python · 422 stars · 57 forks · Updated Apr 24, 2026