(OPEN Efficient Language Models) A family of open source small language models from Apple's Machine Learning Research division. Introduced in 2024, OpenELM was designed to run on the user's device rather than in the cloud, hence the "efficient" in the name. OpenELM uses the transformer architecture and takes advantage of "layer-wise scaling," which varies the settings of each layer of the model, such as the number of attention heads and the width of the feed-forward block, instead of repeating the same configuration throughout. OpenELM's release also includes training logs and pre-training configurations. See
Apple Intelligence.
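The following is a minimal sketch of the layer-wise scaling idea, assuming a simple linear schedule in which early layers get fewer attention heads and narrower feed-forward blocks than later layers. The function name, parameters and default values are illustrative only and are not taken from Apple's released code.

```python
# Illustrative sketch of layer-wise scaling (all names and values are
# assumptions for illustration, not Apple's exact implementation).
# Instead of giving every transformer layer the same width, the number of
# attention heads and the feed-forward expansion factor grow linearly
# from the first layer to the last.

def layer_wise_config(num_layers: int,
                      min_heads: int = 4, max_heads: int = 16,
                      min_ffn_mult: float = 0.5, max_ffn_mult: float = 4.0):
    """Return a per-layer (attention heads, FFN multiplier) schedule."""
    configs = []
    for i in range(num_layers):
        # Fraction of the way through the network, 0.0 at the first layer
        # and 1.0 at the last layer.
        t = i / (num_layers - 1) if num_layers > 1 else 0.0
        heads = round(min_heads + t * (max_heads - min_heads))
        ffn_mult = min_ffn_mult + t * (max_ffn_mult - min_ffn_mult)
        configs.append({"layer": i,
                        "heads": heads,
                        "ffn_multiplier": round(ffn_mult, 2)})
    return configs

if __name__ == "__main__":
    # Print the schedule for a small 8-layer model.
    for cfg in layer_wise_config(num_layers=8):
        print(cfg)
```

The effect is that parameters are allocated unevenly across the depth of the network rather than spread uniformly, which is how a smaller model can make better use of a fixed parameter budget.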
OpenELM Models
OpenELM debuted in four model sizes with 0.27, 0.45, 1.08 and 3.04 billion parameters. Apple reported that the models are slightly more accurate than comparable open language models of similar size while using roughly half as many pre-training tokens. See
small language model.