Peer Review
Multi-model peer review layer that uses local LLMs via Ollama to catch errors in cloud model output: fan critiques out to 2-3 local models, aggregate the flags they raise, and synthesize a consensus. Use when: validating trade analyses, reviewing agent output quality, testing local model accuracy, or checking any high-stakes Claude output before publishing or acting on it. Don't use when: simple fact-checking suffices (just search the web), the task doesn't benefit from multi-model consensus, or the decision is time-critical and cannot absorb roughly 60s of latency.
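The fan-out / aggregate / consensus loop above can be sketched as follows. This is a minimal illustration, not the skill's actual implementation: each critic here is a plain callable standing in for an Ollama-backed model (in practice each would wrap a call such as `ollama.chat`), and the critic names and flag strings are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def peer_review(output_text, critics):
    """Fan output_text to several critic callables in parallel, collect the
    issue flags each one raises, and keep only flags raised by a strict
    majority of critics (the consensus)."""
    with ThreadPoolExecutor(max_workers=len(critics)) as pool:
        flag_sets = list(pool.map(lambda critic: set(critic(output_text)), critics))
    counts = {}
    for flags in flag_sets:
        for flag in flags:
            counts[flag] = counts.get(flag, 0) + 1
    threshold = len(critics) // 2 + 1  # strict majority of critics
    return sorted(flag for flag, n in counts.items() if n >= threshold)

# Hypothetical stub critics; a real deployment would query local models via Ollama.
critic_a = lambda text: ["unsupported claim"] if "guaranteed" in text else []
critic_b = lambda text: ["unsupported claim", "missing units"] if "guaranteed" in text else []
critic_c = lambda text: ["missing units"]

flags = peer_review("Returns are guaranteed to double.", [critic_a, critic_b, critic_c])
# Both flags were raised by 2 of 3 critics, so both survive the consensus filter.
```

Requiring a strict majority is the design choice that makes this useful: a single local model hallucinating an issue does not pollute the result, while errors that multiple independent models notice are surfaced.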
⚠️ BytesAgain does not review or verify third-party content. Proceed at your own risk.
This skill is indexed from ClawHub and is available under its original license. BytesAgain is an independent directory; we do not host or own this content. All rights belong to the original author.