LLM/VLM Compression Foundations
2026-05-10
A working notebook on compressing LLMs and VLMs — why overparameterization is the precondition, how pruning, quantization, and distillation interact, why P-KD-Q ordering dominates, and where compression breaks.
pruning, quantization, knowledge-distillation, token-compression, vision-language-models, neural-architecture-search, hardware-aware-ml
How Can the Individual Find a Home in the AI Era?
2026-03-26
Amid the wave of the AI era, individual value faces unprecedented challenges. This essay examines how technological change reshapes social structures and personal worth, proposes a program of action for anchoring the self amid uncertainty, and calls on individuals to engage actively in shaping a future centered on human well-being.
Hugo Quick Introduction
2026-03-10
First post with Hugo