ai

Docker · Verified Publisher
San Francisco, CA, USA

Displaying 1 to 30 of 70 repositories

| Type | Description | Updated | Pulls | Stars |
|---|---|---|---|---|
| model | Multimodal AI model with 35B MoE architecture for coding agents, reasoning, and vision tasks | 2d | 10K+ | |
| model | 1T MoE multimodal agentic model with long-horizon coding, swarm orchestration, and native vision | 4d | 3.8K | |
| Artifact | 119B MoE model with switchable reasoning mode, multimodal vision, and 256k context window | 4d | 882 | |
| model | 119B parameter hybrid model with reasoning, vision, and code capabilities (1M token context) | 5d | 1.1K | |
| model | Multimodal LLM with 35B parameters for coding, agentic tasks, and vision-language understanding | 9d | 2.9K | |
| model | Gemma 4: multimodal open AI models by Google, optimized for reasoning, coding, and long context. | 17d | 100K+ | 25 |
| model | Gemma 4: multimodal open AI models by Google, optimized for reasoning, coding, and long context. | 23d | 10K+ | |
| model | 397B MoE model with 17B activation for reasoning, coding, agents, and multimodal understanding | 26d | 100K+ | 7 |
| model | 397B-parameter MoE multimodal LLM with 17B active params, 262K context, 201 languages | 26d | 10K+ | 1 |
| model | Qwen3-Coder is Qwen’s new series of coding agent models. | 2m | 100K+ | 26 |
| model | 744B MoE language model with 40B active params for reasoning, coding, and agentic tasks (FP8) | 2m | 10K+ | 3 |
| model | Advanced coding agent model with 80B params (3B active MoE) for code generation and debugging | 2m | 10K+ | 1 |
| model | Efficient 80B MoE coding model with 3B activated params, 256K context, and agentic capabilities | 2m | 50K+ | 1 |
| model | Image generation model, uses a base latent diffusion model plus a refiner. | 3m | 10K+ | 7 |
| model | GLM-4.7-Flash is a top 30B-A3B MoE, balancing strong performance with efficient deployment. | 3m | 10K+ | 4 |
| model | GLM-4.7-Flash is a top 30B-A3B MoE, balancing strong performance with efficient deployment. | 3m | 10K+ | 1 |
| model | Devstral Small 2 is an FP8 instruct LLM for agentic SWE tasks, codebase tooling, and SWE-bench. | 3m | 10K+ | 4 |
| model | FunctionGemma is a 270M open model for fine-tuned, offline function-calling agents on small devices. | 4m | 6.1K | 1 |
| model | FunctionGemma is a 270M open model for fine-tuned, offline function-calling agents on small devices. | 4m | 9.4K | 2 |
| model | Kimi K2 Thinking: open-source agent with deep reasoning, stable tool use, fast INT4, 256k context. | 5m | 50K+ | 2 |
| model | Kimi K2 Thinking: open-source agent with deep reasoning, stable tool use, fast INT4, 256k context. | 5m | 10K+ | 1 |
| model | DeepSeek-V3.2 boosts efficiency and reasoning with DSA, scalable RL, and agentic data; IMO/IOI wins. | 5m | 50K+ | 10 |
| model | Ministral 3: compact vision-enabled model with near-24B performance, optimized for local edge use | 5m | 10K+ | 4 |
| model | Ministral 3: compact vision-enabled model with near-24B performance, optimized for local edge use | 5m | 50K+ | 2 |
| model | Multilingual reranking model for text retrieval, scoring document relevance across 119 languages. | 5m | 10K+ | 3 |
| model | Multilingual reranking model for text retrieval, scoring document relevance across 119 languages. | 5m | 10K+ | |
| model | Snowflake’s Arctic-Embed v2.0 boosts multilingual retrieval and efficiency | 6m | 5.0K | |
| model | Qwen3 Embedding: multilingual models for advanced text/ranking tasks like retrieval & clustering. | 6m | 10K+ | 1 |
| model | Qwen3 Embedding: multilingual models for advanced text/ranking tasks like retrieval & clustering. | 6m | 10K+ | |
| model | OpenAI’s open-weight models designed for powerful reasoning, agentic tasks | 6m | 100K+ | 44 |
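Model repositories under this namespace are typically fetched and run with the Docker Model Runner CLI. A minimal usage sketch, assuming Docker Desktop with Model Runner enabled; the repository name `ai/gemma4` is illustrative, not taken from the listing above (the page does not show repository names):

```shell
# Pull a model repository from the ai namespace
# (repository name below is a hypothetical example)
docker model pull ai/gemma4

# List models available locally
docker model ls

# Run a one-shot prompt against the pulled model
docker model run ai/gemma4 "Explain what a Mixture-of-Experts model is."
```

Unlike `docker pull` for container images, `docker model pull` fetches model weights packaged as OCI artifacts, which is consistent with the pull counts shown per repository.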