evaluating-code-models

Community

Evaluates code generation models on HumanEval, MBPP, MultiPL-E, and 15+ other benchmarks using pass@k metrics. Use when benchmarking code models, comparing coding ability, testing multi-language support, or measuring code generation quality. An industry-standard harness from the BigCode Project, used by the HuggingFace leaderboards.
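For context on the pass@k metric mentioned above: it is conventionally computed with the unbiased estimator introduced in the Codex paper, which estimates the probability that at least one of k samples (drawn from n generations, c of which pass the tests) is correct. A minimal sketch, not necessarily this skill's exact implementation:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k).

    n: total samples generated per problem
    c: samples that pass the unit tests
    k: budget of samples considered
    """
    if n - c < k:
        # Fewer than k failing samples: any k-subset contains a pass.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# 5 generations, 1 correct, k=1 -> plain success rate of 0.2
print(pass_at_k(5, 1, 1))
```

Per-benchmark scores are then typically averaged over all problems, with pass@1 reported from greedy or low-temperature sampling and pass@10/pass@100 from larger sample budgets.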

Install

skillpm install evaluating-code-models

Format score

95/100

Spec

v1.0

Installs

0

Published

April 1, 2026