Pinned · Prompt Caching: Because Who Has Time for Slow AI? · Prompt Caching: first introduced by Anthropic, adopted by OpenAI… and Llama to follow soon · Oct 2
Pinned · OpenAI's o1 Model: Paying for AI's Internal Monologue · The future of AI, where the thoughts are invisible and the meter is always ON! · Sep 13
Pinned · Calculate: How Much GPU Memory You Need to Serve Any LLM? · Just tell me: how much GPU memory do I need to serve my LLM? Anyone else looking for this answer? Read on… · Jul 11
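For readers who want the short answer to that sizing question, a minimal back-of-the-envelope sketch in Python, assuming the widely circulated rule of thumb (memory ≈ parameters × bytes per parameter × ~1.2 overhead for KV cache and activations); the function name, the FP16 default, and the 1.2 factor here are illustrative assumptions, not lifted from the article itself:

    # Back-of-the-envelope GPU memory estimate for serving an LLM.
    # Rule of thumb (an assumption, not the article's exact method):
    # memory_GB ≈ params_in_billions × (quant_bits / 8) × 1.2 overhead.
    def estimate_serving_memory_gb(params_billion: float,
                                   quant_bits: int = 16,
                                   overhead: float = 1.2) -> float:
        bytes_per_param = quant_bits / 8   # e.g. FP16 -> 2 bytes per parameter
        return params_billion * bytes_per_param * overhead

    # Example: a 70B-parameter model in FP16 -> 70 × 2 × 1.2 ≈ 168 GB.
    print(estimate_serving_memory_gb(70))  # 168.0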
Pinned · Dead Cheap LLM Inferencing in Production · A guide for leveraging Spot GPU Instances for cost-effective LLM inference workloads · Jul 7
Pinned · Published in Towards Dev · Redis-GoLang-React Sample Chat App on OpenShift (Kubernetes) · Here is my journal of deploying a sample Redis-GoLang-React real-time chat application on OpenShift · Jan 9, 2022
AI and the Future of Programming: A Developer's Perspective · I just got out of the Xtremepython Conference (virtually), where I was part of a panel discussing "The AI Revolution in Coding: Tools… · 1d ago
Exposing Local WebSocket Connections Securely with FRP & Caddy · A step-by-step implementation guide · Apr 20, 2023