[{"content":"","date":"24 April 2026","externalUrl":null,"permalink":"https://carlos.enredando.me/","section":"","summary":"","title":"","type":"page"},{"content":"I\u0026rsquo;m Carlos Prados, a Telecommunications Engineer with 25+ years of experience at the intersection of architecture, performance, and execution. I\u0026rsquo;m a hands-on CTO and Co-founder who still writes the core production code — because the person who sets the architecture should feel its friction every day.\nToday I wear three hats in parallel. I\u0026rsquo;m CTO, board member and Innovation Lead at Amplía Soluciones, where we build IIoT platforms for utilities, industry, and telecom. I\u0026rsquo;m also co-founder of de Prados y Bodega and a small portfolio of other ventures. And, selectively, I work as a technical advisor to CTOs, boards, and investors — only on things I can actually operate on.\nMy day-to-day spans Go, Python, and a deep ecosystem of Java, JavaScript, and — increasingly — Rust. Lately I\u0026rsquo;m obsessed with agentic AI: autonomous, distributed agents built with MCP, RAG, and the hard problem of turning research into production. I also lead EU and Spanish R\u0026amp;D innovation programs at Amplía.\n","date":"24 April 2026","externalUrl":null,"permalink":"https://carlos.enredando.me/about/","section":"","summary":"I’m Carlos Prados, a Telecommunications Engineer with 25+ years of experience at the intersection of architecture, performance, and execution. I’m a hands-on CTO and Co-founder who still writes the core production code — because the person who sets the architecture should feel its friction every day.\nToday I wear three hats in parallel. I’m CTO, board member and Innovation Lead at Amplía Soluciones, where we build IIoT platforms for utilities, industry, and telecom. I’m also co-founder of de Prados y Bodega and a small portfolio of other ventures. 
And, selectively, I work as a technical advisor to CTOs, boards, and investors — only on things I can actually operate on.\n","title":"About","type":"about"},{"content":"","date":"19 April 2026","externalUrl":null,"permalink":"https://carlos.enredando.me/tags/architecture/","section":"Tags","summary":"","title":"Architecture","type":"tags"},{"content":" Fleet Health Is the Product # The industry sells IoT as if data were the point. It isn\u0026rsquo;t. Data is the receipt. Fleet health is the product.\nA thousand sensors in the desert, a gas meter in every flat of a rural town, a cold-chain tracker strapped to a pallet crossing a border — none of that is worth anything if the fleet degrades between my laptop and the real world. Every device that silently goes dark, every firmware bug that bakes in without a way out, every slot of flash with a bricked image — each of those is a direct cut into whatever value the product was supposed to deliver.\nThis is why, in any serious IoT project, the single most important capability you ship is the ability to ship again. Remote firmware updates are not a feature — they are the lifeline that decides whether the fleet stays in service for ten years, or whether you end up flying a technician to a mountain hut to reflash a 4 MB SoC.\nSo when I sat down to build a new OTA system for constrained fleets, the first design principle was simple: build for the field, not for the lab.\nThe Reality of NB-IoT # Most OTA stacks you find in the wild were designed for well-fed Linux servers. Drag them onto a battery-powered NB-IoT radio waking once per hour and the assumptions collapse. 
Constrained radios are their own planet:\nReality of NB-IoT | Design consequence\nDownlink ~20 kbps, capped at tens of KB per day | bsdiff + zstd deltas; verify signatures before downloading\nMTU 1 280 bytes, packet loss common | CoAP Block2 transfer at 512-byte blocks, HTTP Range resume\nDevice wakes for seconds a day | Asynchronous delta generation on the server, short retry windows\nBattery budget measured in years | Heartbeat-only agent, idle between cycles, no open TCP sockets\nOperator may be 1 000 km away, truck roll takes days | A/B slots with atomic rollback, watchdog with boot-count safety net\nEvery design decision in the project traces back to one of those rows. If it doesn\u0026rsquo;t, it shouldn\u0026rsquo;t be in the code.\nWhen Keystone Doesn\u0026rsquo;t Fit # A couple of months ago I wrote about Keystone, a Go-based edge orchestrator I built to replace AWS Greengrass. Keystone runs multiple components, manages processes and containers, and idles around ~23 MB of RAM on a gateway-class device.\nThat works on a Raspberry Pi. It does not work on an NB-IoT module whose total RAM is 64 MB and whose job is to run one binary that reads a sensor, talks to a backend, and goes back to sleep.\nDifferent scope, not a replacement. When the device hosts a complete edge runtime, use Keystone. When the device is a single binary and all you need is to swap that binary safely, you want an agent footprint closer to 5 MB than 25 MB, no container runtime, no component graph, no orchestration overhead. For those devices, a tight OTA-only agent is not just sufficient — it is obligatory. Anything heavier and you are eating into the exact RAM and flash that the device needs to do its actual job.\nThat is where ota-updater comes in.\nBuild for the Field, Not for the Lab # Most of the hard decisions in this project are not about what the system can do when the network is up and the battery is fresh. 
They are about what happens when everything that can go wrong, goes wrong, and the device is still expected to serve.\nA non-exhaustive list of things the agent must survive without human intervention:\nA delta download that cuts off at 83 %. Network drops back up. Resume from the 83 % mark, not from zero.\nA new binary that panics on startup before its first heartbeat. Boot-count exceeds the budget. Permanent rollback to the last known-good slot. Failure reported upstream.\nA corrupt delta the server shouldn\u0026rsquo;t have served. Signature verification before the download refuses to spend a single downlink byte on it.\nA power cut halfway through writing the inactive slot. Atomic rename plus fsync(dir) guarantees neither the old binary nor the half-written new one is corrupted on boot.\nA configuration mistake that points a thousand devices at a broken version. Canary the rollout, watch updater_heartbeats_total{result=\u0026quot;fail\u0026quot;}, roll back the target before the whole fleet trips.\nIn NB-IoT, every byte is expensive and every reboot is a liability. Design follows.\nArchitecture at a Glance # Two binaries, two transports, one signed payload:\n┌──────────────────────────┐              ┌──────────────────────────┐\n│      update-server       │              │        edge-agent        │\n│   (cloud or on-prem)     │              │     (on the device)      │\n│                          │              │                          │\n│ • signed manifests       │  HTTP/CoAP   │ • heartbeat cycle        │\n│ • bsdiff+zstd deltas     │ ───────────▶ │ • signature verify       │\n│ • LRU hot cache          │  JSON/CBOR   │ • delta download         │\n│ • fsnotify target        │              │ • A/B slot swap          │\n│ • Prometheus /metrics    │              │ • watchdog + rollback    │\n└──────────────────────────┘              └──────────────────────────┘\nThe server is a stateless HTTP + CoAP endpoint in front of a directory of binaries. It computes deltas on demand, caches them in bounded memory, and signs every manifest with Ed25519. The agent is a single Go binary — or, as we\u0026rsquo;ll see, a Go library — that heartbeats, verifies, downloads, patches, and self-replaces. 
No message broker, no job queue, no database; nothing that can be down when a device wakes up at 3 AM looking for orders.\nProtect the Downlink, Not Just the Binary # The usual OTA signature scheme looks like this: sign the hash of the final target binary, ship a delta to the device, let the device apply the delta, then verify the result matches the signed hash. It works, but it pays the cost of the delta download before finding out the delta was tampered with.\nOn a radio where the monthly data budget is measured in hundreds of kilobytes, that is indefensible.\nThe scheme I ended up with (documented exhaustively in docs/signing.md) signs the pair targetHash || deltaHash:\n// ManifestSigningPayload builds the exact bytes that go under Ed25519.\n// Any change in either hash breaks the signature.\nfunc ManifestSigningPayload(targetHash, deltaHash []byte) []byte {\n    buf := make([]byte, 0, len(targetHash)+len(deltaHash))\n    buf = append(buf, targetHash...)\n    buf = append(buf, deltaHash...)\n    return buf\n}\nCost: one Ed25519 signature per (from, to) pair — marginal.\nBenefit: the agent verifies the signature against the exact delta bytes it is about to download, not against the result it will have after spending battery and downlink. A tampered delta is rejected with zero bytes transferred. A corrupt server response is rejected with zero bytes transferred. Only after the patch succeeds and the reconstructed binary matches targetHash does the device commit the swap.\nIt is a small design choice with an outsized impact on cost-per-update in a fleet of thousands.\nA/B Slots and In-Place Self-Update # A/B slot systems are the boring, correct answer to \u0026ldquo;what if the new binary is broken\u0026rdquo;. 
This project uses two slots and an atomic symlink:\n/var/lib/ota-agent/slots/\n├── A/edge-app        # previous version\n├── B/edge-app        # new version, being written\n└── current -\u0026gt; A      # atomic symlink; swap is a rename\nWrite the new binary into the inactive slot, fsync, flip the symlink via atomic rename, then transfer the running process to the new binary. The last step is the one most implementations get clumsy about: they fork a new process, lose the PID, and let the supervisor pick up the pieces.\nI went the other way. The agent\u0026rsquo;s default RestartStrategy is syscall.Exec, which invokes execve(2) on the new binary from inside the current process. The kernel replaces the process image in place — same PID, same PPID, same cgroup, same open file descriptors, same terminal — but running different code. To systemd, the service never restarted. To Docker, PID 1 never changed. To an interactive shell running the agent by hand, the process kept writing logs to the same terminal from the same PID.\nWrap that in a watchdog with N=3 heartbeat retries within a configurable window, a persistent boot counter that triggers permanent rollback after two failed boots (MaxBoots=2), and all on-disk writes guarded by fsync(file) + rename + fsync(dir) — and you have an update process that survives power cuts, bad builds, and transient NB-IoT weather. The details are in the README; the point is that none of it is incidental.\nEmbeddable by Design # The most unusual decision in this project is that the agent is not primarily a binary. It is a Go library.\nA real fleet rarely runs \u0026ldquo;just the updater\u0026rdquo; on a device. It runs your real workload — a telemetry client, a gateway, a payment app — and the updater is a passenger. Shipping two separate binaries means two processes to supervise, two sets of logs, two health models, and twice the complexity. 
So the agent lives in pkg/agent, with public types, an injectable logger, pluggable hooks (HealthChecker, RestartStrategy, HWInfoFunc) and no globals.\nEmbedding it into your own binary is a handful of lines:\nupdater := agent.NewUpdater(agent.UpdaterConfig{\n    ServerURL:   \u0026#34;https://updates.example.com\u0026#34;,\n    DeviceID:    \u0026#34;sensor-0042\u0026#34;,\n    PublicKey:   pubKey,\n    SlotManager: agent.NewSlotManager(\u0026#34;/var/lib/myapp/slots\u0026#34;),\n    BootCounter: agent.NewBootCounter(\u0026#34;/var/lib/myapp/slots/.boot_count\u0026#34;),\n    Watchdog:    watchdog,\n    Primary:     httpClient,\n    Logger:      slog.Default(),\n})\ngo updater.Run(ctx)\nYour application keeps doing its job. The updater heartbeats in the background, downloads and verifies deltas, and — when the time comes — syscall.Execs your own binary into its next version. Same PID, same sockets, same everything except the code.\nConclusion: Fleet Health Is the Product # The first time you watch a device in a different country update itself over a 20 kbps radio, reboot into a new version, and check itself healthy — all without human intervention — is when you understand what this code is actually for. Not a feature. Not a convenience. The difference between a fleet that keeps delivering value for a decade and one that slowly rots into support tickets.\nBuild for the field, not the lab. Fleet health is the product.\nThe code lives at carlosprados/ota-updater. Ed25519-signed delta patches, HTTP + CoAP transports, A/B slots, watchdog, rollback, atomic writes, Prometheus metrics, pprof, and a full step-by-step demo (with a companion Bruno API collection) for anyone who wants to feel the thing swap under their own hands before trusting it with their own fleet. 
Pull requests welcome — field reports even more so.\n","date":"19 April 2026","externalUrl":null,"permalink":"https://carlos.enredando.me/posts/build-for-the-field/","section":"Posts","summary":"Fleet Health Is the Product # The industry sells IoT as if data were the point. It isn’t. Data is the receipt. Fleet health is the product.\nA thousand sensors in the desert, a gas meter in every flat of a rural town, a cold-chain tracker strapped to a pallet crossing a border — none of that is worth anything if the fleet degrades between my laptop and the real world. Every device that silently goes dark, every firmware bug that bakes in without a way out, every slot of flash with a bricked image — each of those is a direct cut into whatever value the product was supposed to deliver.\n","title":"Build for the Field, Not the Lab: Shipping OTA Updates to NB-IoT Devices","type":"posts"},{"content":"","date":"19 April 2026","externalUrl":null,"permalink":"https://carlos.enredando.me/categories/","section":"Categories","summary":"","title":"Categories","type":"categories"},{"content":"","date":"19 April 2026","externalUrl":null,"permalink":"https://carlos.enredando.me/categories/engineering/","section":"Categories","summary":"","title":"Engineering","type":"categories"},{"content":"","date":"19 April 2026","externalUrl":null,"permalink":"https://carlos.enredando.me/tags/firmware-update/","section":"Tags","summary":"","title":"Firmware-Update","type":"tags"},{"content":"","date":"19 April 2026","externalUrl":null,"permalink":"https://carlos.enredando.me/tags/golang/","section":"Tags","summary":"","title":"Golang","type":"tags"},{"content":"","date":"19 April 2026","externalUrl":null,"permalink":"https://carlos.enredando.me/tags/iot/","section":"Tags","summary":"","title":"Iot","type":"tags"},{"content":"","date":"19 April 
2026","externalUrl":null,"permalink":"https://carlos.enredando.me/tags/nb-iot/","section":"Tags","summary":"","title":"Nb-Iot","type":"tags"},{"content":"","date":"19 April 2026","externalUrl":null,"permalink":"https://carlos.enredando.me/tags/open-source/","section":"Tags","summary":"","title":"Open-Source","type":"tags"},{"content":"","date":"19 April 2026","externalUrl":null,"permalink":"https://carlos.enredando.me/tags/ota/","section":"Tags","summary":"","title":"Ota","type":"tags"},{"content":"","date":"19 April 2026","externalUrl":null,"permalink":"https://carlos.enredando.me/posts/","section":"Posts","summary":"","title":"Posts","type":"posts"},{"content":"","date":"19 April 2026","externalUrl":null,"permalink":"https://carlos.enredando.me/categories/projects/","section":"Categories","summary":"","title":"Projects","type":"categories"},{"content":"","date":"19 April 2026","externalUrl":null,"permalink":"https://carlos.enredando.me/tags/","section":"Tags","summary":"","title":"Tags","type":"tags"},{"content":"","date":"12 April 2026","externalUrl":null,"permalink":"https://carlos.enredando.me/tags/agentic-ai/","section":"Tags","summary":"","title":"Agentic AI","type":"tags"},{"content":"","date":"12 April 2026","externalUrl":null,"permalink":"https://carlos.enredando.me/tags/ai/","section":"Tags","summary":"","title":"AI","type":"tags"},{"content":"","date":"12 April 2026","externalUrl":null,"permalink":"https://carlos.enredando.me/categories/artificial-intelligence/","section":"Categories","summary":"","title":"Artificial Intelligence","type":"categories"},{"content":"","date":"12 April 2026","externalUrl":null,"permalink":"https://carlos.enredando.me/tags/design-patterns/","section":"Tags","summary":"","title":"Design Patterns","type":"tags"},{"content":"","date":"12 April 
2026","externalUrl":null,"permalink":"https://carlos.enredando.me/categories/development/","section":"Categories","summary":"","title":"Development","type":"categories"},{"content":"","date":"12 April 2026","externalUrl":null,"permalink":"https://carlos.enredando.me/tags/langchain/","section":"Tags","summary":"","title":"LangChain","type":"tags"},{"content":"In my previous post, we explored Prompt Chaining — the simplest way to break a complex task into sequential steps. But real-world systems rarely follow a straight line. Users don\u0026rsquo;t come with labels on their foreheads telling you what they need. Sometimes they want to book a flight, sometimes they want a factual answer, sometimes they want something you didn\u0026rsquo;t even anticipate.\nThat\u0026rsquo;s where the Routing pattern comes in. And honestly, this is where things start to feel like you\u0026rsquo;re building something real.\nPattern #2: Routing # The Problem # Imagine you\u0026rsquo;re building a customer service system. A user types: \u0026ldquo;Book me a flight to London.\u0026rdquo; Another one types: \u0026ldquo;What\u0026rsquo;s the capital of Italy?\u0026rdquo; If you throw both prompts at the same generic LLM pipeline, you get generic results. No specialization, no efficiency, no structure.\nWhat you actually need is a coordinator — something that reads the request, understands the intent, and sends it to the right handler. A booking request goes to the booking specialist. A factual question goes to the information agent. An unclear request gets flagged for clarification.\nThis is routing. It\u0026rsquo;s the AI equivalent of the receptionist at a company who listens to your question and says \u0026ldquo;Let me transfer you to the right department.\u0026rdquo;\nThe Solution # The Routing pattern introduces a classification step before any real work happens. The LLM first decides what kind of request it\u0026rsquo;s looking at, then delegates to the appropriate specialist pipeline. 
Two main approaches:\n1. LCEL Chain Routing — The lightweight approach. A router chain classifies the intent and a RunnableLambda dispatches to the right handler function:\nrouter_chain = coordinator_prompt | llm | StrOutputParser()\nfull_chain = (\n    {\u0026#34;decision\u0026#34;: router_chain, \u0026#34;request\u0026#34;: RunnablePassthrough()}\n    | RunnableLambda(route_to_handler)\n)\nThe LLM outputs a single word — booker, info, or unclear — and the lambda function routes accordingly. Simple, effective, and surprisingly powerful for most use cases.\n2. LangGraph StateGraph Routing — The structured approach. When you need more control, visibility, and the ability to add complexity later, you model routing as a graph with conditional edges:\nbuilder = StateGraph(RouterState)\nbuilder.add_node(\u0026#34;classify_intent\u0026#34;, classify_intent)\nbuilder.add_node(\u0026#34;booking_node\u0026#34;, booking_node)\nbuilder.add_node(\u0026#34;info_node\u0026#34;, info_node)\nbuilder.add_edge(START, \u0026#34;classify_intent\u0026#34;)\nbuilder.add_conditional_edges(\n    \u0026#34;classify_intent\u0026#34;,\n    route_by_intent,\n    {\u0026#34;booking_node\u0026#34;: \u0026#34;booking_node\u0026#34;, \u0026#34;info_node\u0026#34;: \u0026#34;info_node\u0026#34;}\n)\nEach specialist is a full node in the graph — it can have its own LLM, its own prompt, its own tools. The coordinator classifies, and the graph\u0026rsquo;s conditional edges do the dispatching. This is the approach I prefer for anything beyond a toy example, because it scales naturally: need a new specialist? Add a node and an edge. Done.\nWhy This Matters # Routing is deceptively fundamental. Every non-trivial agentic system has routing somewhere, whether it\u0026rsquo;s explicit or buried inside a framework\u0026rsquo;s abstractions. When you understand the pattern, you see it everywhere:\nCustomer support bots that escalate to a human or resolve automatically. 
Multi-model systems that send simple queries to a fast/cheap model and complex ones to a powerful/expensive model.\nTool-using agents that decide which tool to call before calling it.\nThe key insight is that the LLM itself is the router. You don\u0026rsquo;t need a separate rules engine or a decision tree. You just ask the model to classify, and then you act on the classification. It\u0026rsquo;s LLMs all the way down.\nThe Bigger Picture # I\u0026rsquo;ve been working intensely on agentic AI systems — both in my role as CTO and as a personal deep-dive into what I believe is the most impactful shift in software architecture in years. Understanding these foundational patterns is not optional if you want to build solutions that actually work in production. The gap between \u0026ldquo;I called an API and got a response\u0026rdquo; and \u0026ldquo;I built an intelligent system that reasons, delegates, and adapts\u0026rdquo; is exactly these patterns.\nI\u0026rsquo;m documenting my learning through this series, following Antonio Gulli\u0026rsquo;s excellent Agentic Design Patterns book. Full credit goes to him for the concepts — I\u0026rsquo;m refactoring the examples into pure Python with LangGraph, making them runnable and testable.\nAll the code from this post (and every other chapter) is available in my repository: carlosprados/Agentic_Design_Patterns. Every example runs with uv run, supports both Google Gemini and local models via Ollama, and is ready to experiment with.\nWhat\u0026rsquo;s Next # In the next post, we\u0026rsquo;ll tackle Parallelization — how to run multiple agents simultaneously and merge their results. Think of it as fan-out/fan-in for LLMs. 
It\u0026rsquo;s where things start to get fast.\nStay tuned.\n","date":"12 April 2026","externalUrl":null,"permalink":"https://carlos.enredando.me/posts/agentic-ai-routing/","section":"Posts","summary":"In my previous post, we explored Prompt Chaining — the simplest way to break a complex task into sequential steps. But real-world systems rarely follow a straight line. Users don’t come with labels on their foreheads telling you what they need. Sometimes they want to book a flight, sometimes they want a factual answer, sometimes they want something you didn’t even anticipate.\nThat’s where the Routing pattern comes in. And honestly, this is where things start to feel like you’re building something real.\n","title":"Mastering Agentic AI: The Routing Pattern","type":"posts"},{"content":"","date":"12 April 2026","externalUrl":null,"permalink":"https://carlos.enredando.me/tags/python/","section":"Tags","summary":"","title":"Python","type":"tags"},{"content":"","date":"16 February 2026","externalUrl":null,"permalink":"https://carlos.enredando.me/tags/aws/","section":"Tags","summary":"","title":"Aws","type":"tags"},{"content":" The Vendor Lock-in Trap # For the better part of the last decade, the industry narrative around IoT has been dominated by a \u0026ldquo;Cloud-First\u0026rdquo; mentality. The premise was seductive: treat your edge devices as dumb pipes and offload the heavy lifting to the cloud\u0026rsquo;s infinite compute.\nHowever, as we moved from prototypes to production at scale, cracks began to appear in this facade. Latency mattered. Bandwidth costs exploded. But most importantly, we realized that by adopting managed edge runtimes, we were walking into a trap of architectural dependency.\nThis is the story of Keystone, an open-source project born from the necessity to reclaim control over the edge. 
It is a lightweight edge orchestration agent written in Go, designed to deploy and manage components on devices — processes by default, containers when needed — with the simplicity and freedom that solutions like AWS IoT Greengrass promised but failed to deliver.\nAWS IoT Greengrass is an impressive piece of engineering, but it suffers from a critical flaw common to hyperscaler tools: it is designed to increase your consumption of their cloud.\nWhen you build on Greengrass, you aren\u0026rsquo;t just writing code; you are adopting a proprietary mental model:\nLambda-on-Edge: You are forced to package logic as AWS Lambda functions, coupling your business logic to AWS-specific signatures.\nIPC Dependency: Inter-process communication relies on proprietary Greengrass IPC, making your components non-portable.\nOpaque Runtime: The \u0026ldquo;Nucleus\u0026rdquo; (the Java-based core of Greengrass V2) is a resource-heavy black box. Debugging a memory leak in the JVM on a constrained device is not how I want to spend my weekends.\nWe needed a solution that treated the cloud as a destination, not a dependency.\nThe Philosophy: Edge Sovereignty # Keystone is built on a divergent philosophy: the device is sovereign. It owns its component lifecycle, manages its own state, and keeps running regardless of connectivity. Cloud connectivity is a feature, not a requirement for operation. If the network goes down, Keystone keeps its components running, and pending deployment jobs queue up locally via JetStream until the connection recovers.\nWhere Greengrass tries to be a cloud runtime pushed to the edge, Keystone is a local-first orchestrator that optionally takes orders from the cloud.\nWhy Go? # The decision to build Keystone in Go was strategic. Having architected systems in Java, Python, and C++, Go offered the perfect intersection of performance and developer experience for the edge.\nThe Power of the Static Binary # Dependency management on embedded Linux is a nightmare. 
Python environments break with OS updates; Java requires a heavy JVM installation.\nGo compiles to a single, static binary. We can drop the 21 MB Keystone executable onto a Raspberry Pi, a ruggedized industrial gateway, or a server, and it just works. No pip install, no JVM tuning, no missing shared libraries.\nConcurrency is Native # Edge orchestration is inherently asynchronous. An agent might be starting a process, monitoring health checks on three running components, downloading an artifact with resume, and processing a deployment command from NATS — all simultaneously.\nGo\u0026rsquo;s Goroutines and Channels allow Keystone to handle all of this concurrently with a fraction of the memory footprint of a thread-based Java application.\nMetric | AWS Greengrass (Java) | Keystone (Go)\nIdle RAM | ~100 MB+ | ~23 MB\nCold Boot | 8-15 seconds | \u0026lt; 0.5 seconds\nDeployment | Complex artifact bundle | Single Binary\nKeystone Architecture # Keystone follows a Ports and Adapters pattern. The core orchestration logic is completely decoupled from how you talk to the agent.\nThe Core: Recipes, Plans, and a Supervisor # Instead of Lambda functions and proprietary IPC, Keystone uses simple TOML recipes to describe components, and deployment plans to declare the desired state of a device.\nA recipe defines what to run and how to keep it healthy:\n[metadata]\nname = \u0026#34;com.example.api\u0026#34;\nversion = \u0026#34;1.0.0\u0026#34;\n\n[[artifacts]]\nuri = \u0026#34;https://artifacts.internal/api-server.tar.gz\u0026#34;\nsha256 = \u0026#34;a1b2c3...\u0026#34;\nunpack = true\n\n[lifecycle.run.exec]\ncommand = \u0026#34;./api-server\u0026#34;\nargs = [\u0026#34;--port\u0026#34;, \u0026#34;9090\u0026#34;]\n\n[lifecycle.run.health]\ncheck = \u0026#34;http://127.0.0.1:9090/healthz\u0026#34;\ninterval = \u0026#34;10s\u0026#34;\nA plan lists the components you want running. 
Apply it and Keystone resolves the dependency graph, starts components layer by layer in parallel, monitors their health, and restarts them according to their restart policy.\nUnder the hood, a Supervisor manages each component through a state machine (none → installing → starting → running → stopping → stopped/failed), building a DAG from declared dependencies and starting independent components in parallel:\n// StartStack starts components respecting the dependency DAG.\nfunc StartStack(ctx context.Context, comps []*Component) error {\n    g := BuildGraph(comps)\n    layers, err := g.TopoLayers()\n    if err != nil {\n        return err // cycle detected\n    }\n    for _, layer := range layers {\n        var wg sync.WaitGroup\n        errCh := make(chan error, len(layer))\n        for _, name := range layer {\n            comp := g.Nodes[name]\n            wg.Add(1)\n            go func(c *Component) {\n                defer wg.Done()\n                if err := c.Install(ctx); err != nil {\n                    errCh \u0026lt;- err\n                    return\n                }\n                if err := c.Start(ctx); err != nil {\n                    errCh \u0026lt;- err\n                }\n            }(comp)\n        }\n        wg.Wait()\n        // handle errors, rollback if needed...\n    }\n    return nil\n}\nRunners: Processes First, Containers When Needed # By default, Keystone uses a ProcessRunner that spawns native processes with proper signal handling (process groups), log streaming, and health probes (HTTP, TCP, or shell command). When you need containers, a ContainerRunner uses containerd natively, with automatic fallback to Docker, nerdctl, or Podman CLI.\nYou choose per-recipe: set type = \u0026quot;container\u0026quot; in the lifecycle and Keystone handles image pulling, mounts, port mappings, and resource limits.\nControl Plane Adapters: Your Cloud, Your Choice # This is where we break the lock-in. The CommandHandler interface defines all operations the agent supports (apply plan, stop, restart, health). Each transport adapter just translates the wire protocol:\nHTTP: REST API for local management and Prometheus metrics (default).\nNATS: Pub/Sub with optional JetStream for durable, offline-capable job queues. 
MQTT: IoT-native messaging for AWS IoT Core, Mosquitto, EMQX, or any standard broker.\nAll three can run simultaneously. Switching from NATS to MQTT, or adding HTTP alongside either, is a matter of CLI flags — no code change, no recompile.\n# HTTP + NATS + MQTT simultaneously\n./keystone --http :8080 \\\n  --nats-url nats://nats.internal:4222 --nats-device-id edge-001 \\\n  --mqtt-broker tcp://mqtt.internal:1883 --mqtt-device-id edge-001\nConclusion: Freedom is an Architecture Choice # We didn\u0026rsquo;t build Keystone to compete with AWS Greengrass\u0026rsquo;s feature set. We built it to compete with its philosophy.\nBy choosing Go, we gained performance and operational simplicity. By choosing pluggable adapters over cloud-native coupling, we gained the freedom to swap out underlying technologies as the market evolves. By choosing TOML recipes over proprietary deployment formats, we made edge orchestration something any team can understand in minutes.\nIf you are tired of the complexity of managed edge runtimes and want an agent that puts you back in the driver\u0026rsquo;s seat, check out the project on GitHub: carlosprados/keystone.\n","date":"16 February 2026","externalUrl":null,"permalink":"https://carlos.enredando.me/posts/why-keystone/","section":"Posts","summary":"The Vendor Lock-in Trap # For the better part of the last decade, the industry narrative around IoT has been dominated by a “Cloud-First” mentality. The premise was seductive: treat your edge devices as dumb pipes and offload the heavy lifting to the cloud’s infinite compute.\nHowever, as we moved from prototypes to production at scale, cracks began to appear in this facade. Latency mattered. Bandwidth costs exploded. 
But most importantly, we realized that by adopting managed edge runtimes, we were walking into a trap of architectural dependency.\n","title":"Breaking the Chains: Why I Built Keystone to Replace AWS Greengrass","type":"posts"},{"content":"","date":"16 February 2026","externalUrl":null,"permalink":"https://carlos.enredando.me/tags/edge-computing/","section":"Tags","summary":"","title":"Edge-Computing","type":"tags"},{"content":"We are witnessing a massive shift in how we interact with Artificial Intelligence. We are moving from simple, reactive chatbots to Agentic Systems—autonomous entities capable of reasoning, planning, and interacting with the world to achieve complex goals.\nBut how do we actually build these systems? It\u0026rsquo;s not just about having a powerful LLM; it\u0026rsquo;s about the architecture around it. It requires structure, design, and a thoughtful approach to how the agent perceives and acts.\nI recently started diving deep into the book \u0026ldquo;Agentic Design Patterns: A Hands-On Guide to Building Intelligent Systems\u0026rdquo; by Antonio Gulli (LinkedIn). This book is a fantastic resource that extracts key architectural blueprints for building AI agents.\nTo really understand these concepts, I decided to get my hands dirty. I am launching a new series of blog posts where I will explore these patterns one by one. Full credit goes to Antonio Gulli for defining these patterns; my goal here is simply to document my learning journey and share the practical implementation of his ideas.\nThe Project: Agentic Design Patterns on GitHub # Reading code in a PDF is one thing; running it is another.\nI have started a new open-source project to accompany this series. 
I am taking the concepts and code from Antonio\u0026rsquo;s book and converting them into a clean, executable, and easy-to-navigate repository.\nYou can follow along, star the repo, and try the code yourself here:\n👉 GitHub: carlosprados/Agentic_Design_Patterns\nMy goal is to provide a \u0026ldquo;canvas\u0026rdquo; for developers—a practical foundation where you can see these patterns in action using frameworks like LangChain and Google\u0026rsquo;s Agent Developer Kit.\nPattern #1: Prompt Chaining # Let\u0026rsquo;s start at the beginning. As described in Chapter 1 of the book, the most foundational pattern in the Agentic world is Prompt Chaining (sometimes called the Pipeline pattern).\nThe Problem: The Monolithic Prompt # When we first start using LLMs, the tendency is to stuff everything into one massive prompt. We ask the model to \u0026ldquo;Read this 20-page report, extract the dates, summarize the key findings, check for errors, and format it as a JSON object.\u0026rdquo;\nThis often leads to failure. The model gets overwhelmed. It might hallucinate, forget instructions (\u0026ldquo;instruction neglect\u0026rdquo;), or mix up the formatting. The cognitive load is simply too high for a single inference step.\nThe Solution: Divide and Conquer # Prompt Chaining solves this by decomposing a complex task into a sequence of smaller, manageable sub-tasks.\nInstead of one giant leap, we take structured steps:\nStep 1: Extract text from the document. Step 2: Summarize that text. Step 3: Format the summary into JSON. The output of one step becomes the input for the next. This creates a dependency chain where the context and results of previous operations guide the subsequent processing.\nWhy this matters # By breaking the chain, you gain several advantages:\nReliability: Each step is simpler, reducing the chance of error. Debuggability: If the output is wrong, you know exactly which link in the chain failed. 
Focus: You can use different system prompts (or even different models!) for different steps. You might use a cheap, fast model for formatting and a \u0026ldquo;smart\u0026rdquo; model for reasoning. Seeing it in Action # In the GitHub repository, I\u0026rsquo;ve implemented a classic example of this pattern based on the book\u0026rsquo;s guidance.\nThe code demonstrates a pipeline that takes raw, unstructured text (like a technical description of a laptop) and passes it through a chain to:\nExtract specific technical specifications. Transform and sanitize that data. Format it into a clean JSON structure ready for a database. Here is a snippet of the logic using LangChain (check the repo for the full runnable source):\n# A conceptual look at the chain structure extraction_chain = prompt_extract | llm | StrOutputParser() # The output of extraction feeds into the transformation full_chain = ( {\u0026#34;specifications\u0026#34;: extraction_chain} | prompt_transform | llm | StrOutputParser() ) This modularity is the building block of more complex behaviors. Once you master chaining, you can start building agents that don\u0026rsquo;t just follow a straight line, but can decide which chain to use.\nWhat\u0026rsquo;s Next? Prompt Chaining is just the tip of the iceberg. In the next post, we will explore Routing (Chapter 2)—giving our agents the ability to make decisions and choose different paths based on the user\u0026rsquo;s intent.\nMake sure to check out the repository, clone it, and try running the Chapter 1 examples.\nSpecial thanks to Antonio Gulli for his inspiring work.\n","date":"14 January 2026","externalUrl":null,"permalink":"https://carlos.enredando.me/posts/agentic-ai-prompt-chaining/","section":"Posts","summary":"We are witnessing a massive shift in how we interact with Artificial Intelligence. 
We are moving from simple, reactive chatbots to Agentic Systems—autonomous entities capable of reasoning, planning, and interacting with the world to achieve complex goals.\nBut how do we actually build these systems? It’s not just about having a powerful LLM; it’s about the architecture around it. It requires structure, design, and a thoughtful approach to how the agent perceives and acts.\n","title":"Mastering Agentic AI: The Prompt Chaining Pattern","type":"posts"},{"content":"","date":"17 March 2022","externalUrl":null,"permalink":"https://carlos.enredando.me/authors/admin/","section":"Authors","summary":"","title":"Admin","type":"authors"},{"content":"","date":"17 March 2022","externalUrl":null,"permalink":"https://carlos.enredando.me/authors/","section":"Authors","summary":"","title":"Authors","type":"authors"},{"content":"Master class at ETSIT - Escuela Técnica Superior de Ingenieros de Telecomunicación on the evolution of telemetry and telecontrol up to the era of Digital Transformation and IoT.\nPractical examples of IoT use in the industrial sector:\nHow to monitor and continuously improve the performance of the smart grid in the electrical sector: medium voltage switchgear, transformer substations. Building sites rethought using industry 4.0 standards: a new way to find health and safety solutions in building sites bringing sensors to workers through wearables for occupational risk prevention and monitoring of high-value assets. How gas and electricity utilities can manage smart meters. Study how to track vehicles and assets using IoT. 
","date":"17 March 2022","externalUrl":null,"permalink":"https://carlos.enredando.me/talks/etsit-iot/","section":"Last Talks","summary":"The evolution of telemetry and telecontrol up to Internet of Things","title":"De la Telemetría y el Telecontrol al Internet de la Cosas Industrial aplicado a la eficiencia energética","type":"talks"},{"content":"","date":"17 March 2022","externalUrl":null,"permalink":"https://carlos.enredando.me/tags/device-management/","section":"Tags","summary":"","title":"Device-Management","type":"tags"},{"content":"","date":"17 March 2022","externalUrl":null,"permalink":"https://carlos.enredando.me/talks/","section":"Last Talks","summary":"","title":"Last Talks","type":"talks"},{"content":"","date":"1 January 2022","externalUrl":null,"permalink":"https://carlos.enredando.me/tags/amplia/","section":"Tags","summary":"","title":"Amplia","type":"tags"},{"content":"I\u0026rsquo;m very proud of the OpenGate IoT platform . It\u0026rsquo;s my main focus of attention in amplía))) since 2002.\nI\u0026rsquo;m managing the team behind OpenGate and architecting the platform with the help of an excellent team of professionals who make the experience of developing OpenGate an absolute pleasure.\nOpenGate is an IoT platform helping deliver remote data to business processes and controlling all the Internet of Things infrastructure, including sensors, assets, devices, SIM cards, and lines a company can use in their Digital Transformation projects.\n","date":"1 January 2022","externalUrl":null,"permalink":"https://carlos.enredando.me/projects/amplia-opengate/","section":"Projects","summary":"OpenGate IoT Platform allows obtaining general status and performance of remote devices, machines, and other items worldwide through a friendly and centralized administration web console","title":"OpenGate IoT Platform","type":"projects"},{"content":"","date":"1 January 
2022","externalUrl":null,"permalink":"https://carlos.enredando.me/projects/","section":"Projects","summary":"","title":"Projects","type":"projects"},{"content":"","date":"4 November 2021","externalUrl":null,"permalink":"https://carlos.enredando.me/authors/carlos-prados/","section":"Authors","summary":"","title":"Carlos Prados","type":"authors"},{"content":" Harnessing IoT and AI for Snow Avalanche Prediction: A Pioneering Project by Amplía Soluciones # In the realm of natural disaster prediction and management, the innovative MEWS project stands out as a beacon of technological advancement and practical application. At the heart of this initiative is Amplía Soluciones, under the leadership of our team, which has played a pivotal role in developing the project\u0026rsquo;s artificial intelligence inference engine.\nIntegrating Advanced Big Data and IoT Capabilities # Our journey began with enhancing the big data capabilities of our IoT platform, OpenGate. This robust platform was intricately designed to ingest and prepare vast amounts of data, paving the way for data scientists from Beia Consult International to perform their analytical magic. The collaboration between our teams was instrumental in harnessing the data effectively for predictive analytics.\nThe LoRaWAN Stack Integration # A significant milestone in our project was integrating a full-fledged LoRaWAN stack into OpenGate. This integration allowed us to seamlessly receive, normalize, and process the data from remote sensors and gateways. This advancement was not just about technology integration; it was about creating a reliable and efficient pipeline for data to flow from the most remote sensors right into our central analytics system.\nDeveloping the AI Inference Engine # The AI inference engine, a brainchild of our team at Amplía Soluciones, stands as the project\u0026rsquo;s crown jewel. This engine is the core that makes predictions about potential snow avalanches. 
By analyzing the data fed into it, the engine can identify patterns and indicators that signify a high risk of avalanches, thus enabling timely warnings and potentially saving lives and property.\nOpenGate and the Automation Rules # The integration of OpenGate\u0026rsquo;s automation rules with our AI inference engine marked a significant leap in our project\u0026rsquo;s capabilities. These rules allow OpenGate to interact with the inference engine, triggering alarms and mobilizing response mechanisms in the event of a high avalanche risk detection. This automation not only enhances the speed of response but also ensures accuracy and reliability in avalanche prediction.\nConclusion: A Step Forward in Disaster Management # Our project is more than a technological endeavor; it is a commitment to safety and proactive disaster management. The collaborative efforts between Amplía Soluciones, Beia Consult International, and other stakeholders have culminated in a system that not only predicts snow avalanches but also paves the way for advanced natural disaster management strategies.\nAs we continue to refine and enhance MEWS, we stand at the forefront of innovation, demonstrating how technology can be a powerful ally in safeguarding communities against the unpredictable forces of nature.\n","date":"4 November 2021","externalUrl":null,"permalink":"https://carlos.enredando.me/publications/mews/","section":"Publications","summary":"This article introduces a Cloud-based platform designed for detecting and predicting snow avalanches. It emphasizes the process of collecting sensor data, storing it in the cloud, and using it as input for Machine Learning algorithms. The paper outlines the data workflow, system functionalities, and the essential data validation and pre-processing steps required before employing Machine Learning. 
Additionally, it discusses the integration of these Machine Learning features with the OpenGate cloud platform.","title":"MEWS - an IoT and Cloud-Based avalanche detection and prediction platform","type":"publications"},{"content":"","date":"4 November 2021","externalUrl":null,"permalink":"https://carlos.enredando.me/publications/","section":"Publications","summary":"","title":"Publications","type":"publications"},{"content":"","date":"4 November 2021","externalUrl":null,"permalink":"https://carlos.enredando.me/tags/source-themes/","section":"Tags","summary":"","title":"Source Themes","type":"tags"},{"content":" Regulation and digitalization # Regulation has led to the massive adoption of Smart Metering and has shown us the enormous possibilities that the digitization of electricity grids offers.\nWe are beginning a new technological evolution in the electricity sector, one that also applies to other industry segments that have not received the regulatory push.\nOther utilities are also adopting automated consumption reading and digitization of their distribution networks. Smart Grids are hatching, and virtually every company in the sector has initiatives in this area.\nWhile the various regulations have focused on guaranteeing the proper functioning and improvement of the service to end customers, the utilities will benefit from advances in their management through the incorporation of sensors in all their elements and the automation of all business processes in real-time.\nBenefits of digitization # The possibilities of improved efficiencies, cost reduction, failure prevention, and early problem detection are difficult to quantify today. 
Utilities will undoubtedly capture those benefits, and they question the business case for investment in digitization less and less.\nReal-time monitoring of all network elements will allow complete inventory control, with the identification and location of all assets, their lifecycle, status, remote updates of their configurations or firmware, and a proper association of the communications provided. This way, utilities obtain a better knowledge of the distribution networks\u0026rsquo; current state, operation, and maintenance. Utilities can achieve significant economic savings by reducing operational costs, guaranteeing the security of supply to avoid failures and accidents, and updating the products and services they offer to customers.\nThe challenges of the future # This digitization of power grids becomes imperative as we face a future in which electric cars are no longer anecdotal but dominant. Another driver of these needs will be the proliferation of highly distributed renewable energy sources and the need to manage a decentralized distribution network.\nToday, we find that IoT operations implementations have gaps and opportunities for improvement.\nIt is essential to make the right decisions now that the utilities are creating the foundations for the future Smart Grid. Taking a far-sighted view of the new capabilities and functionalities required by the IoT infrastructure, the utilities will improve effectiveness and add agility by integrating the devices, communications, and platforms implemented in IoT solutions. 
It is necessary to have the tools with these capabilities and functionalities to address the challenges that the Smart Grid will face in the coming years.\nConclusions # amplía))) has the necessary capabilities and experience to accompany power generation and distribution companies in the challenge of Smart Grids. Indeed, Berg Insight, in its report IoT Platforms and Software - 4th Edition, cites amplía))) as one of the world leaders in device management in the utility sector.\n","date":"22 June 2020","externalUrl":null,"permalink":"https://carlos.enredando.me/posts/device-management-utilities/","section":"Posts","summary":"Regulation has led to the massive adoption of Smart Metering and has shown us the enormous possibilities that the digitization of electricity grids offers.","title":"Device Management in utilities","type":"posts"},{"content":"","date":"1 April 2020","externalUrl":null,"permalink":"https://carlos.enredando.me/tags/artificial-intelligence/","section":"Tags","summary":"","title":"Artificial-Intelligence","type":"tags"},{"content":"","date":"1 April 2020","externalUrl":null,"permalink":"https://carlos.enredando.me/tags/lora/","section":"Tags","summary":"","title":"Lora","type":"tags"},{"content":"","date":"1 April 2020","externalUrl":null,"permalink":"https://carlos.enredando.me/tags/lora-wan/","section":"Tags","summary":"","title":"Lora-Wan","type":"tags"},{"content":"MEWS is a European Eurostars project aiming to be an early snow avalanche detection system backed by the Artificial Intelligence Inference Engine developed by amplía))) for its OpenGate IoT platform.\nI\u0026rsquo;ve managed the team working on this exciting project. 
I\u0026rsquo;ve also helped with the architecture of the whole system: the artificial intelligence inference engine, the LoRaWAN connector, time-series storage performance, and device management customizations.\nLoRa sensors deployed in the field collect data in a valley where avalanches occur every winter. The sensors then transmit the data to a local LoRaWAN gateway, which sends it to the OpenGate IoT platform LoRaWAN connector. OpenGate adapts the received data and stores it in the most suitable form.\nA group of data scientists downloads the curated data from OpenGate to train an Artificial Intelligence model. When the model is ready, the data scientists deploy it into the Inference Engine using OpenGate\u0026rsquo;s REST API.\nLoaded with the AI model, OpenGate can predict snow avalanches to help people avoid risks.\n","date":"1 April 2020","externalUrl":null,"permalink":"https://carlos.enredando.me/projects/amplia-eu-mews/","section":"Projects","summary":"Artificial intelligence system for early detection of snow avalanches","title":"MEWS","type":"projects"},{"content":"","date":"14 August 2019","externalUrl":null,"permalink":"https://carlos.enredando.me/tags/mqtt/","section":"Tags","summary":"","title":"MQTT","type":"tags"},{"content":"","date":"14 August 2019","externalUrl":null,"permalink":"https://carlos.enredando.me/tags/opengate/","section":"Tags","summary":"","title":"OpenGate","type":"tags"},{"content":"This post shows how to use the OpenGate MQTT connector.\nThe MQTT protocol provides a lightweight method of carrying out messaging using a publish/subscribe model. These features make it suitable for Internet of Things messaging, such as low-power sensors or mobile devices like phones, embedded computers, or microcontrollers.\nIn MQTT, there are a few basic concepts that you need to understand:\nPublish/Subscribe - In a publish and subscribe system, a device can publish a message on a topic or be subscribed to a particular topic to receive messages. 
Messages - Messages are the information you want to exchange between your devices. These messages can either be a command or data. Topics - Topics are the way you register interest in incoming messages or how you specify where you want to publish the message. Broker - The broker is primarily responsible for receiving all messages, filtering them, deciding who is interested in them, and then publishing the message to all subscribed clients. In MQTT, a publisher (device/client) publishes messages on a topic, and a subscriber must subscribe to that topic to view the message.\nOpenGate MQTT connector # These are the connection details:\nHost: api.opengate.es Port: 1883 User: device_id Password: your_api_key How to obtain your API key # In addition, you\u0026rsquo;ll need your API key. Once you log in to the OpenGate web interface, you can find your API key by clicking on the cogs at the top-right of the OpenGate home page, then on the User option, and finally on the \u0026ldquo;Click to show\u0026rdquo; link.\nOpenGate Topics # To publish collected data: odm/iot/{device_id} To subscribe to incoming operations from OpenGate: odm/request/{device_id} To publish operation responses: odm/response/{device_id} To ask OpenGate for pending operations: odm/operationOnDemand/{device_id} You have to replace {device_id} with the OpenGate unique identifier of your device.\nPayload # The payload definition in the official OpenGate documentation is entirely valid for publishing data. You only have to add a \u0026quot;device\u0026quot;: \u0026quot;device_id\u0026quot; field, filled with your OpenGate device unique identifier, at the top level of the JSON document with the collected values. 
See the following example:\n{ \u0026#34;version\u0026#34;: \u0026#34;1.0.0\u0026#34;, \u0026#34;device\u0026#34;: \u0026#34;device_id\u0026#34;, \u0026#34;datastreams\u0026#34;: [ { \u0026#34;id\u0026#34;: \u0026#34;temperature\u0026#34;, \u0026#34;datapoints\u0026#34;: [ { \u0026#34;at\u0026#34;: 1431602523123, \u0026#34;value\u0026#34;: 24.1 } ] } ] } Using mosquitto client # Eclipse Mosquitto is an open source (EPL/EDL licensed) message broker that implements the MQTT protocol versions 5.0, 3.1.1, and 3.1. Mosquitto is lightweight and suitable for all devices, from low-power single board computers to full servers.\nSubscribing to OpenGate MQTT operations topic with the mosquitto client # Your device must subscribe to the odm/request/{device_id} topic to receive operation requests from the North API. Replace {device_id} with the provisioned device id:\n// extract from provision JSON ... \u0026#34;provision.device.identifier\u0026#34;: { \u0026#34;_value\u0026#34;: { \u0026#34;_current\u0026#34;: { \u0026#34;value\u0026#34;: \u0026#34;sensehat01\u0026#34; } } } ... 
Example using mosquitto_sub:\n# Debian/Ubuntu install: sudo apt-get install mosquitto-clients mosquitto_sub \\ -h api.opengate.es \\ -u \u0026#34;your-device-id\u0026#34; \\ -P \u0026#34;your-api-key\u0026#34; \\ -t \u0026#34;odm/request/your-device-id\u0026#34; Launch operations using the North API # The North API end-point for jobs and operations is https://api.opengate.es:443/north/v80/operation/jobs.\nEvery http request to the North or South API must include this header: X-ApiKey: your-api-key.\nExample using curl:\ncurl -X POST \\ -H \u0026#34;Content-Type: application/json\u0026#34; \\ -H \u0026#34;X-ApiKey: your-api-key\u0026#34; \\ -d @SET_DEVICE_PARAMETERS.json \\ https://api.opengate.es:443/north/v80/operation/jobs SET_DEVICE_PARAMETERS.json file used in the previous example:\n{ \u0026#34;job\u0026#34;: { \u0026#34;request\u0026#34;: { \u0026#34;operationParameters\u0026#34;: { \u0026#34;timeout\u0026#34;: 90000, \u0026#34;retries\u0026#34;: 0, \u0026#34;retriesDelay\u0026#34;: 0 }, \u0026#34;name\u0026#34;: \u0026#34;SET_DEVICE_PARAMETERS\u0026#34;, \u0026#34;schedule\u0026#34;: { \u0026#34;stop\u0026#34;: { \u0026#34;delayed\u0026#34;: 120000 } }, \u0026#34;parameters\u0026#34;: { \u0026#34;variableList\u0026#34;: [{ \u0026#34;name\u0026#34;: \u0026#34;f\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;180\u0026#34; }] }, \u0026#34;target\u0026#34;: { \u0026#34;append\u0026#34;: { \u0026#34;entities\u0026#34;: [\u0026#34;sensehat01\u0026#34;] } }, \u0026#34;active\u0026#34;: true } } } HTTP response:\n{ \u0026#34;id\u0026#34;: \u0026#34;0898f8e6-6773-43d4-ac34-fab214581463\u0026#34;, \u0026#34;request\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;SET_DEVICE_PARAMETERS\u0026#34;, \u0026#34;parameters\u0026#34;: { \u0026#34;variableList\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;r\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;180\u0026#34; } ] }, \u0026#34;active\u0026#34;: true, \u0026#34;notify\u0026#34;: false, \u0026#34;user\u0026#34;: 
\u0026#34;sensehat@amplia.es\u0026#34;, \u0026#34;schedule\u0026#34;: { \u0026#34;stop\u0026#34;: { \u0026#34;delayed\u0026#34;: \u0026#34;120000\u0026#34; }, \u0026#34;scattering\u0026#34;: { \u0026#34;strategy\u0026#34;: {} }, \u0026#34;window\u0026#34;: { \u0026#34;weekly\u0026#34;: [ { \u0026#34;daily\u0026#34;: {} } ] } }, \u0026#34;operationParameters\u0026#34;: { \u0026#34;timeout\u0026#34;: 90000, \u0026#34;retries\u0026#34;: 0, \u0026#34;retriesDelay\u0026#34;: 0 } }, \u0026#34;report\u0026#34;: { \u0026#34;execution\u0026#34;: {}, \u0026#34;target\u0026#34;: {} } } If you didn\u0026rsquo;t provision the parameter to be set in a data stream, you\u0026rsquo;ll receive the following response from the North API:\n{ \u0026#34;errors\u0026#34;: [ { \u0026#34;code\u0026#34;: \u0026#34;0x020003\u0026#34;, \u0026#34;message\u0026#34;: \u0026#34;At least one valid reference to an entity (list of entities or tags) is required.\u0026#34; } ] } Processing operations on the device side # OpenGate operation routing # OpenGate routes operations to specific connectors depending on:\na specific manufacturer and model pair, or specific operations. For the sake of this example, let\u0026rsquo;s assume routing by manufacturer and model; thus, the provisioning must be (OpenGate/OpenGateMqtt):\n... \u0026#34;provision.device.model\u0026#34;: { \u0026#34;_value\u0026#34;: { \u0026#34;_current\u0026#34;: { \u0026#34;value\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;OpenGateMqtt\u0026#34;, \u0026#34;manufacturer\u0026#34;: \u0026#34;OpenGate\u0026#34; } } } } ... Example operation: SET_DEVICE_PARAMETERS # Parameters to be set must be configured in a data model as data streams and must be WRITABLE (and only WRITABLE).\nPublishing operation responses to OpenGate MQTT connector with the mosquitto client # OpenGate only supports one MQTT session at a time. 
Don\u0026rsquo;t connect two mosquitto_pub or mosquitto_sub clients simultaneously.\nThe following example publishes a JSON payload stored on SET_DEVICE_PARAMETERS_RESPONSE.json file with an example of the response required by OpenGate Operations Engine:\n# Debian/Ubuntu install: sudo apt-get install mosquitto-clients mosquitto_pub \\ -h api.opengate.es \\ -u \u0026#34;your-device-id\u0026#34; \\ -P \u0026#34;your-api-key\u0026#34; \\ -t \u0026#34;odm/response/your-device-id\u0026#34; \\ -f SET_DEVICE_PARAMETERS_RESPONSE.json The following code is an example of a response SET_DEVICE_PARAMETERS_RESPONSE.json file:\n{ \u0026#34;version\u0026#34;: \u0026#34;7.0\u0026#34;, \u0026#34;operation\u0026#34;: { \u0026#34;response\u0026#34;: { \u0026#34;deviceId\u0026#34;: \u0026#34;sensehat01\u0026#34;, \u0026#34;timestamp\u0026#34;: 1599810726318, \u0026#34;name\u0026#34;: \u0026#34;SET_DEVICE_PARAMETERS\u0026#34;, \u0026#34;id\u0026#34;: \u0026#34;f5d56834-63b8-45d8-93cb-5fdfcc30de9b\u0026#34;, \u0026#34;resultCode\u0026#34;: \u0026#34;SUCCESSFUL\u0026#34;, \u0026#34;resultDescription\u0026#34;: \u0026#34;Success\u0026#34;, \u0026#34;steps\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;SET_DEVICE_PARAMETERS\u0026#34;, \u0026#34;timestamp\u0026#34;: 1599810726318, \u0026#34;result\u0026#34;: \u0026#34;SUCCESSFUL\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;Parameters set ok\u0026#34;, \u0026#34;response\u0026#34;: [] } ] } } } Publishing IoT data # # Debian/Ubuntu install: sudo apt-get install mosquitto-clients mosquitto_pub \\ -h api.opengate.es \\ -u \u0026#34;your-device-id\u0026#34; \\ -P \u0026#34;your-api-key\u0026#34; \\ -t \u0026#34;odm/iot/your-device-id\u0026#34; \\ -f iot_data.json MQTT connector supports the payloads defined in the OpenGate\u0026rsquo;s public documentation, with the following extra considerations:\nAdditionally, the JSON message must include the field \u0026quot;device\u0026quot;: \u0026quot;your-device-id\u0026quot;. 
This field is required when you\u0026rsquo;re using the OpenGate MQTT connector.\n{ \u0026#34;version\u0026#34;: \u0026#34;1.0.0\u0026#34;, \u0026#34;device\u0026#34;: \u0026#34;your-device-id\u0026#34;, \u0026#34;datastreams\u0026#34;: [ { \u0026#34;id\u0026#34;: \u0026#34;temperature.from.pressure\u0026#34;, \u0026#34;datapoints\u0026#34;: [ { \u0026#34;at\u0026#34;: 1529398727, \u0026#34;value\u0026#34;: 28 } ] }, { \u0026#34;id\u0026#34;: \u0026#34;temperature.from.humidity\u0026#34;, \u0026#34;datapoints\u0026#34;: [ { \u0026#34;at\u0026#34;: 1529398727, \u0026#34;value\u0026#34;: 30 } ] }, { \u0026#34;id\u0026#34;: \u0026#34;pressure\u0026#34;, \u0026#34;datapoints\u0026#34;: [ { \u0026#34;at\u0026#34;: 1529398727, \u0026#34;value\u0026#34;: 957 } ] }, { \u0026#34;id\u0026#34;: \u0026#34;humidity\u0026#34;, \u0026#34;datapoints\u0026#34;: [ { \u0026#34;at\u0026#34;: 1529398727, \u0026#34;value\u0026#34;: 40 } ] } ] } ","date":"14 August 2019","externalUrl":null,"permalink":"https://carlos.enredando.me/posts/og-mqtt-quick-guide/","section":"Posts","summary":"Learn how to use OpenGate MQTT connector.","title":"OpenGate MQTT connector","type":"posts"},{"content":"This post shows how to use the OpenGate demo UDP connector.\nThe OpenGate demo connector is not ready for production use. It lacks minimal security features like encryption and will be removed or changed soon. Use it at your own risk. Access data # Some UDP libraries and environments need domain name resolution. In those cases, api.opengate.es must be previously resolved to its IP address. At the time of this document, the IP address for api.opengate.es was 35.157.105.177. 
Please check it before starting your test.\nIf you\u0026rsquo;re like me and are using a Linux machine, type nslookup:\ncharlie@dune\u0026gt; nslookup api.opengate.es Server: 127.0.0.53 Address: 127.0.0.53#53 Non-authoritative answer: Name: api.opengate.es Address: 35.157.105.177 Host Name: api.opengate.es Host IP: 35.157.105.177 UDP Port: 10001 How to obtain your API key # In addition, you\u0026rsquo;ll need your API key. Once you log in to the OpenGate web interface, you can find your API key by clicking on the cogs at the top-right of the OpenGate home page, then on the User option, and finally on the \u0026ldquo;Click to show\u0026rdquo; link.\nPayload # The payload definition available in the official OpenGate documentation is entirely valid for data publishing. You only have to add, at the top level of the JSON document, the following fields:\na \u0026quot;device\u0026quot;: \u0026quot;your-device-id\u0026quot; field, filled with your OpenGate device unique identifier. a \u0026quot;apikey\u0026quot;: \u0026quot;your-api-key\u0026quot; field, filled with your OpenGate API key. See the following example:\nThis example shows how to send data points to two data streams:\nentity.location: this is one of the default data streams provided out of the box by OpenGate. temperature: this is a custom data stream to send temperature. 
{ \u0026#34;version\u0026#34;: \u0026#34;1.0.0\u0026#34;, \u0026#34;device\u0026#34;: \u0026#34;your-device-id\u0026#34;, \u0026#34;apikey\u0026#34;: \u0026#34;your-api-key\u0026#34;, \u0026#34;datastreams\u0026#34;: [ { \u0026#34;id\u0026#34;: \u0026#34;device.temperature.value\u0026#34;, \u0026#34;datapoints\u0026#34;: [ { \u0026#34;at\u0026#34;: 1545129674, \u0026#34;value\u0026#34;: 23 } ] }, { \u0026#34;id\u0026#34;: \u0026#34;entity.location\u0026#34;, \u0026#34;datapoints\u0026#34;: [ { \u0026#34;at\u0026#34;: 1545129674, \u0026#34;value\u0026#34;: { \u0026#34;position\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;Point\u0026#34;, \u0026#34;coordinates\u0026#34;: [-3.6737821999999993, 40.4475705] } } } ] } ] } ","date":"18 December 2018","externalUrl":null,"permalink":"https://carlos.enredando.me/posts/og-udp-quick-guide/","section":"Posts","summary":"Learn how to use the OpenGate demo UDP connector.","title":"OpenGate Demo UDP connector","type":"posts"},{"content":"","date":"18 December 2018","externalUrl":null,"permalink":"https://carlos.enredando.me/tags/udp/","section":"Tags","summary":"","title":"UDP","type":"tags"},{"content":"","date":"5 November 2018","externalUrl":null,"permalink":"https://carlos.enredando.me/tags/coap/","section":"Tags","summary":"","title":"CoAP","type":"tags"},{"content":"This post shows how to use the OpenGate Constrained Application Protocol (CoAP) connector.\nAccess data # Some CoAP libraries and environments need domain name resolution. In those cases, api.opengate.es must be previously resolved to its IP address. At the time of this document, the IP address for api.opengate.es was 35.157.105.177. 
Please check it before starting your test.\nIf you\u0026rsquo;re like me and are using a Linux machine, type nslookup:\ncharlie@dune\u0026gt; nslookup api.opengate.es Server: 127.0.0.53 Address: 127.0.0.53#53 Non-authoritative answer: Name: api.opengate.es Address: 35.157.105.177 Host: api.opengate.es or 35.157.105.177 (see previous warning note) CoAP Port: 5683 CoAP URI: coap://35.157.105.177:5683/v80/devices/collect/iot Custom options # OpenGate CoAP connector uses these custom options:\n2502: api_key 2503: device_id 2504: message_protocol_version Payload # The payload definition in the official OpenGate documentation is entirely valid for publishing data. You only have to add, at the top level of the JSON document, the following fields:\na \u0026quot;device\u0026quot;: \u0026quot;your-device-id\u0026quot; field, filled with the OpenGate device unique identifier. a \u0026quot;apikey\u0026quot;: \u0026quot;your-api-key\u0026quot; field, filled with your OpenGate API key. See the following example:\n{ \u0026#34;version\u0026#34;: \u0026#34;1.0.0\u0026#34;, \u0026#34;device\u0026#34;: \u0026#34;your-device-id\u0026#34;, \u0026#34;datastreams\u0026#34;: [ { \u0026#34;id\u0026#34;: \u0026#34;temperature\u0026#34;, \u0026#34;datapoints\u0026#34;: [ { \u0026#34;at\u0026#34;: 1431602523123, \u0026#34;value\u0026#34;: 24.1 } ] } ] } ","date":"5 November 2018","externalUrl":null,"permalink":"https://carlos.enredando.me/posts/og-coap-quick-guide/","section":"Posts","summary":"Learn how to use the OpenGate CoAP connector.","title":"OpenGate CoAP connector","type":"posts"},{"content":"In our consultancy sessions with clients, we often emphasize the importance of distinguishing between assets and devices in IoT projects.\nThe parking-sensor example # To illustrate this concept, let\u0026rsquo;s consider the example of a digitalized parking system. 
One of the key features we want to offer customers is real-time information on the number of available parking spaces in the entire parking lot, on each floor, and in each row, as well as green/red lights above each parking spot to indicate occupancy status.\nArchitecture # The solution architecture could have:\nAn occupancy sensor in each parking place. An edge computing system capable of: collecting the data of each occupancy sensor; calculating the number of free places in the whole parking lot, on each floor, and in each row; sending the number of free places to the appropriate displays; and finally, sending the collected and calculated information to the IoT platform for analytics purposes. Displays to show the number of free places. An IoT platform to collect, display, and analyze the information sent by the edge system. Where are the assets # While this architecture includes various devices such as sensors, displays, and edge computers, it\u0026rsquo;s important to note that the assets in this scenario are the parking places themselves. By understanding this distinction, we can approach the digitalization of these assets with a more strategic and holistic perspective, ultimately leading to more effective and successful IoT projects.\n","date":"27 July 2018","externalUrl":null,"permalink":"https://carlos.enredando.me/posts/iot-assets-and-devices/","section":"Posts","summary":"A well-architected IoT solution requires a good comprehension of how business assets and devices complement each other.","title":"Assets vs. 
Devices in IoT solutions","type":"posts"},{"content":"The Carlos III University in Getafe, Madrid, allowed me to present at the T3chFest of 2016, this time alone, a vision of how the Internet of Things - IoT can help the industrial sector improve the efficiency of its production processes through digitization and monitoring.\nDuring the presentation, I showed the case of Neuronalia and MetScale as an end-to-end cloud solution for the automatic collection of power consumption and signal quality data for early detection of anomalies in industrial and electrical equipment operation.\n","date":"11 February 2016","externalUrl":null,"permalink":"https://carlos.enredando.me/talks/t3ch-fest/","section":"Last Talks","summary":"The roots of IoT and how to use it to improve Energy Efficiency.","title":"Internet de las Cosas Industrial aplicado a la eficiencia energética","type":"talks"},{"content":"On this occasion, my former colleague and now friend David Fernandez and I were fortunate to be chosen to present how the Internet of Things is applied to improve the energy efficiency of large consumers: industrial processes.\nDuring our presentation, we explained how the Internet of Things has been with us for some time, albeit with other names (telemetry, telecontrol, M2M), embracing many aspects of daily life, business, and information systems.\nWe were able to offer a less common vision of IoT, the Industrial Internet of Things - IIoT, showing its use as a tool to find new opportunities through improving Energy Efficiency in industrial and office buildings.\n","date":"27 November 2015","externalUrl":null,"permalink":"https://carlos.enredando.me/talks/code-motion/","section":"Last Talks","summary":"Both power companies and electricity consumers benefit from using IoT for energy efficiency.","title":"Internet de las Cosas Industrial aplicado a la eficiencia energética","type":"talks"},{"content":"","date":"1 January 
2015","externalUrl":null,"permalink":"https://carlos.enredando.me/tags/electricity/","section":"Tags","summary":"","title":"Electricity","type":"tags"},{"content":"","date":"1 January 2015","externalUrl":null,"permalink":"https://carlos.enredando.me/tags/energy/","section":"Tags","summary":"","title":"Energy","type":"tags"},{"content":"I\u0026rsquo;ve helped on SMIP since its very beginning, leading the device management team to continuously raise the level of the service in terms of features and performance.\nThe Smart Metering Implementation Programme is an energy-industry-led program aiming to roll out approximately 53 million smart electricity and gas meters to domestic properties and non-domestic sites in Great Britain. The SMIP seeks to provide consumers with “real-time” information on their energy consumption to help them:\ncontrol and manage their energy use; save money and reduce carbon emissions; bring an end to estimated billing; and make informed purchasing decisions, minimizing the barriers to switching between suppliers. It also aims to support Great Britain’s transition to a low-carbon economy and meet some of the challenges in ensuring an affordable, secure and sustainable energy supply.\n","date":"1 January 2015","externalUrl":null,"permalink":"https://carlos.enredando.me/projects/amplia-smip/","section":"Projects","summary":"SMIP aims to roll out approximately 53 million smart meters in Great Britain","title":"SMIP - Smart Metering Implementation Program","type":"projects"},{"content":"Silicon Alley Madrid invited me to participate as a speaker at this \u0026ldquo;First Day of Technology at the Service of Energy Efficiency\u0026rdquo;.\nI prepared the event with great enthusiasm and was very happy to be allowed to help attendees learn that using Internet of Things technologies to optimize energy consumption is both possible and very real. 
I demonstrated how active monitoring of electricity consumption and of the quality of the signal delivered by power distribution companies can help reduce the electricity bill and improve the environment.\nI was able to perform a live demonstration of a commercial monitoring service that I helped to develop. This service has enabled companies and organizations to substantially reduce their electricity bills and increase the lifetime of their electrical appliances.\nThe event organizers also aimed to promote sustainable energy consumption, waste reduction, and optimal waste treatment.\n","date":"14 October 2014","externalUrl":null,"permalink":"https://carlos.enredando.me/talks/silicon-alley/","section":"Last Talks","summary":"This talk brought insights into how to use IoT technology to improve energy efficiency.","title":"La tecnología al servicio de la eficiencia energética","type":"talks"},{"content":"","date":"20 May 2010","externalUrl":null,"permalink":"https://carlos.enredando.me/tags/agile/","section":"Tags","summary":"","title":"Agile","type":"tags"},{"content":"In May 2010, I had the privilege of attending as a speaker at the First SME Management Forum organized by Bureau Veritas and EOI - Escuela de Organización Industrial in Madrid.\nDuring my presentation, I was able to explain the mixed Scrum + Kanban approach that we apply to the agile development of products and services in the development team that I have had the good fortune to lead for 20 years at amplía))).\nAs can be deduced from the date of the presentation (12 years ago), at amplía))) we are pioneers in implementing and using agile methodologies, with 12 years of practical experience applying these techniques daily.\n","date":"20 May 2010","externalUrl":null,"permalink":"https://carlos.enredando.me/talks/eoi-agilizando/","section":"Last Talks","summary":"How to apply agile methodologies and methods to product development.","title":"Agilizando la gestión de 
proyectos","type":"talks"},{"content":"","date":"20 May 2010","externalUrl":null,"permalink":"https://carlos.enredando.me/tags/kanban/","section":"Tags","summary":"","title":"Kanban","type":"tags"},{"content":"","date":"20 May 2010","externalUrl":null,"permalink":"https://carlos.enredando.me/tags/scrum/","section":"Tags","summary":"","title":"Scrum","type":"tags"},{"content":"","date":"1 January 2009","externalUrl":null,"permalink":"https://carlos.enredando.me/tags/android/","section":"Tags","summary":"","title":"Android","type":"tags"},{"content":"I developed a native Android application to listen to live music streamed from a music club in Madrid.\nThe project was especially challenging in 2009 because of the lack of native support in the Android OS for music streaming coming from a Shoutcast radio streaming server.\nAdditionally, I had to find a way to extract the meta-information from the music stream to show the artist, album, and song title during playback.\nIt was a fun project, and I was very proud to see my app on the Play Store in those days.\n","date":"1 January 2009","externalUrl":null,"permalink":"https://carlos.enredando.me/projects/consultant-android-radio-app/","section":"Projects","summary":"Android application to listen to live music from a club in Madrid","title":"Android Live Streaming Radio App","type":"projects"},{"content":"","date":"1 January 2009","externalUrl":null,"permalink":"https://carlos.enredando.me/tags/consultancy/","section":"Tags","summary":"","title":"Consultancy","type":"tags"},{"content":"","date":"1 January 2009","externalUrl":null,"permalink":"https://carlos.enredando.me/tags/development/","section":"Tags","summary":"","title":"Development","type":"tags"},{"content":"I\u0026rsquo;ve been helping Endesa\u0026rsquo;s Enterprise Monitoring Center for more than ten years to carry out its Digital Transformation journey, leaning on amplía)))\u0026rsquo;s OpenGate IoT platform. 
This Monitoring Center is part of Endesa\u0026rsquo;s Teleservices Control Center, belonging to the Telecom department.\nNowadays, Endesa, part of the Enel Group, is using OpenGate to support its daily operations in Spain and a growing set of LatAm countries: Brazil, Chile, Colombia, etc., and probably Italy soon. These operations consist of supporting new smart meter and concentrator installations and solving issues with the ones already installed.\nThe system we built provides a unified approach to monitoring, supervision, and integral control of different vertical business services in the company, covering all the phases needed for the remote device management of both concentrators and smart meters.\n","date":"1 December 2008","externalUrl":null,"permalink":"https://carlos.enredando.me/projects/amplia-endesa-atlas/","section":"Projects","summary":"Endesa deployed 140 thousand concentrators to support its 11 million smart meters, thanks to OpenGate","title":"Atlas Endesa - OpenGate Device Management \u0026 Monitoring","type":"projects"},{"content":"Working as a consultant for Bureau Veritas, I helped several companies obtain the official ISO/IEC 33002:2015 certification, formerly known as ISO/IEC 15504 (SPICE).\nI gained deep knowledge of how ISO certification processes work and how to prepare well to pass the certification assessment.\nBesides the goal of passing the certification appraisal, I helped the companies enter a positive loop of continuous improvement, not only to maintain the certification but also to improve their development, testing, and production deployment processes in the future.\nI was very proud that all the companies I helped obtained the certification.\n","date":"27 April 2008","externalUrl":null,"permalink":"https://carlos.enredando.me/projects/consultant-bureau-veritas/","section":"Projects","summary":"I helped several companies to obtain the 
ISO/IEC 33002:2015 certification","title":"Consultant for ISO/IEC 33002:2015","type":"projects"},{"content":"","date":"27 April 2008","externalUrl":null,"permalink":"https://carlos.enredando.me/tags/iso/","section":"Tags","summary":"","title":"ISO","type":"tags"},{"content":"I helped a US company develop part of the backend of a mobile application for movie theater ticket reservations. The deliverable was a backend component for integrating vendor alert systems to enable push notifications from server applications.\n","date":"1 January 2007","externalUrl":null,"permalink":"https://carlos.enredando.me/projects/consultant-push-notifications/","section":"Projects","summary":"Backend system to send real-time push notifications to mobile devices","title":"iOS \u0026 BlackBerry push notification system","type":"projects"},{"content":"Geo-location system for Amena (later acquired by Orange). Telecom-grade system to enable an enterprise integration with mobile operator geo-location features (as subcontracted staff).\n","date":"1 January 2002","externalUrl":null,"permalink":"https://carlos.enredando.me/projects/teamlog-geoloc/","section":"Projects","summary":"Geo-location system for Amena (later acquired by Orange). Telecom-grade system to enable an enterprise integration with mobile operator geo-location features (as subcontracted staff).\n","title":"Geo-location System for Amena","type":"projects"},{"content":"","date":"1 January 2002","externalUrl":null,"permalink":"https://carlos.enredando.me/tags/java/","section":"Tags","summary":"","title":"Java","type":"tags"},{"content":"I was the project manager and main developer of an XML to CICS and CICS to XML backend translator for online banking at Bankinter.\nThe project had strong, high-performance requirements. 
All online transactions, both internet and telephone, passed through this component in those days.\n","date":"1 October 2001","externalUrl":null,"permalink":"https://carlos.enredando.me/projects/teamlog-cics/","section":"Projects","summary":"Backend service to","title":"XML \u003c-\u003e CICS high-performance translator","type":"projects"},{"content":"","externalUrl":null,"permalink":"https://carlos.enredando.me/series/","section":"Series","summary":"","title":"Series","type":"series"}]