Self-Aware Blockchain

Photo by Icons8 Team on Unsplash


Imagine life in 5, maybe 10 years (and not everywhere, because the future is not distributed equally). For some people the day will look like this: they wake up and are greeted by their AI concierge. It may live on their phone or their voice device; it may have a physical form, such as a friendly robot or a very attractive humanoid; it may even interact through a neural link connected directly to their brain.

The AI assistant/concierge will manage, assist, navigate, and take care of every possible aspect of their life. Think of Tony Stark’s Jarvis from the Iron Man and Avengers movies. It will prepare an optimal meal plan based on current and desired health, body weight, or specific preferences (plant-based, paleo, gluten-free, etc.). It will then order the food items from the best possible place, have them delivered, and even book suitable restaurants, update the calendar, and create travel plans sent to the navigation apps in cars, e-bikes, etc. Meal plans will be negotiated with the other AIs that manage the lives of friends, family, and the restaurants.

All payments, data transfers, and signing up/in/off will happen in the background, completely abstracted from the users. There will be no payment cards, no front-end email apps, no social media apps. Only APIs talking to AIs.

This will happen in business as well. AIs will find the best employees after negotiating with their AIs, checking verified education and employment records, and the trusted, persistent reputations of the employees, subcontractors, etc. Supply chains and production facilities will be AI-run. Self-operated flying cars will be designed and operated by AIs. You get the picture. (On another note, many current jobs will become completely redundant and obsolete, and it’s interesting to think about what new jobs will emerge. And will they?)

On the back end, to make this possible there are “only” two things needed: 1) an enormous amount of computational resources everywhere and 2) an enormous amount of high-quality data. This can be achieved by the likes of Google, Facebook, and Amazon in the US and some parts of Western Europe, working hand in hand with the NSA, and by their Chinese counterparts everywhere else in the world. This vision of the future is truly disturbing: a perfect slavery-with-extra-steps system controlled by the 1% of the 1% behind a polished, slick, Disneyland-like façade that pushes consumerism even further.

or Vision?

The alternative can emerge from the intersection of technologies, concepts, and trends like open source, peer-to-peer, decentralized identity, community ownership, and blockchain, leading to the creation of one or more DAOs (Decentralized Autonomous Organizations).

The main purposes of the DAO will be to:

  • maintain the open standards for decentralized identities, identity resolution, data validation, and sharing
  • coordinate an open, decentralized cloud infrastructure that can be used by anyone to manage their identities and data
  • develop open-source tools to govern, improve, and further develop the first two points above.

End users will be able to control their digital identities and data. For example, they will be able to access services and products without creating yet another login and password; they will simply sign in using their digital wallet. They will also be able to choose how their data is hosted (locally, in a public cloud like AWS, in a p2p torrent-like network, or in a decentralized cloud) and who has access to it. In practical terms, this means switching costs drop to zero and there’s no more vendor lock-in. Interoperability also comes out of the box (imagine LinkedIn users messaging Instagram users).

Looks like this can solve the problem of having enormous amounts of high-quality data. What about compute? This is where the “self-aware blockchain” comes in. Currently, blockchain networks consist of homogeneous nodes performing homogeneous tasks: basically, all the nodes do the same simple thing at the same time: run a piece of code (a “smart contract”) and update the transaction ledger. The network has no ability to route different tasks to different nodes or to divide tasks, and it has no information about the state of the nodes themselves. Now imagine a network that can autonomously manage its state based on the current and predicted demand for cloud services and on the current and predicted state of each node. The “current and predicted state” can include the technical aspects of a node, but also the regulatory, compliance, and risk-management aspects, not to mention the economics of each node and of the network as a whole (pricing, exchange rates, energy prices, etc.).
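To make the routing idea concrete, here is a toy sketch in Python. The node attributes (CPU headroom, latency, allowed regions, price) are invented for illustration; this is not a description of any existing protocol, just one way state-aware task routing could look.

```python
# Toy sketch of state-aware task routing. Node attributes are
# illustrative assumptions, not part of any existing blockchain.

def pick_node(nodes, task):
    """Route a task to the best-scoring eligible node."""
    eligible = [
        n for n in nodes
        if n["cpu_free"] >= task["cpu_needed"]        # technical state
        and task["region"] in n["allowed_regions"]    # regulatory state
    ]
    if not eligible:
        return None
    # Cheaper and closer nodes score better (economic state).
    return min(eligible, key=lambda n: n["price"] * n["latency_ms"])

nodes = [
    {"id": "a", "cpu_free": 4, "latency_ms": 20,
     "allowed_regions": {"eu"}, "price": 0.05},
    {"id": "b", "cpu_free": 8, "latency_ms": 50,
     "allowed_regions": {"eu", "us"}, "price": 0.03},
]
best = pick_node(nodes, {"cpu_needed": 2, "region": "eu"})  # node "a" wins here
```

A real network would, of course, need a consensus mechanism and verifiable node telemetry on top of this; the point is only that routing decisions can combine technical, regulatory, and economic signals.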

This would create a decentralized, self-optimizing, self-managing cloud where each task would be processed in the way most efficient for that specific task. Cloud users would interact with a completely fluid, on-demand layer of services, fully abstracted from the physical networks, servers, and storage. On the supply side, cloud services would be offered by a decentralized ecosystem of independent providers (professional data centers, cloud providers, and also individuals and companies offering their spare capacity). The self-aware blockchain would then take care of allocating workloads to providers, billing, payments, and so on.

So, when we say “self-aware blockchain”, we actually mean a decentralized autonomous organization that is aware of the technical, legal, and financial state of its network and all its nodes, and that uses AI and tokenized incentives to maximize outcomes for all participants. But “self-aware blockchain” is a bit less wordy…

A DAO that operates a decentralized, self-managed cloud infrastructure and open-source standards for self-sovereign identity and data, and that runs on a “self-aware” blockchain, can be a viable infrastructure for a hopefully slightly brighter, less dystopian future.


Our core values are autonomy, openness, and competence. Our mission is to make our clients feel in control, confident, and secure. This is why the only long-term vision we can subscribe to is open-source and transparent: a vision with a Self-Aware Blockchain. Stay tuned on Twitter or LinkedIn for updates on how this vision takes shape.

Practical tips that can help you control cloud costs

What would you do with all the money saved on cloud bills? [Photo by Tech Nick on Unsplash]

30% or more

You probably significantly overpay (by 30% or more) for cloud infrastructure and don’t even know it. Cloud services are complex, and cloud billing is complicated and opaque. It may seem like cloud providers have you over a barrel (which is only partially true). Finally, cloud consumption and billing are difficult to predict. All of this means that cloud costs can be hard to manage or even to identify accurately. The problem is big enough that there are consulting companies specializing solely in minimizing clients’ cloud costs.

Executives estimate that at least 30 percent of their cloud spending is wasted. 
— Forbes / State of the Cloud Report 2020

I’d like to share a few practical, actionable tips and suggestions that can help you lower your cloud infrastructure costs. Most can be implemented in-house, by yourself, without hiring external help. If you have any questions, if anything needs further clarification, or if I missed something, please don’t hesitate to comment or contact me on Twitter or LinkedIn.

First I’ll cover some general principles, concepts, and culture related to managing IT infrastructure and development that result in lower cloud bills. In part 2 I’ll list some specific, practical actions that can save you a few dollars here and there.

Part 1 — General principles

Your cloud provider is your friend

Your cloud provider wants you to succeed. For them, successful clients mean long-term business, so they will want to help you with whatever they can. So, don’t be shy: talk to your cloud account manager. Tell them about your project and your needs. They will be in a very good position to recommend the best technology. They are also the people who can explain in detail how the billing works, what you should expect to pay, and what each line item means. Moreover, they can provide you with lots of resources about the technology you are using, so you get the most mileage out of every dollar you spend with them. Sure, there’s a risk they will try to upsell or cross-sell you more services, but educating yourself about the billing, the terms and conditions, the technology, and its applications reduces information asymmetry and gives you more bargaining power.

Grow organically / Right-size

The only way to future-proof your IT infrastructure is to stay flexible (which I’ll cover later). Over-engineering, over-sizing, and “pre-optimizing” will only cost you more money and will never work for two simple reasons: 1) business requirements change very quickly and 2) technology changes even faster. Trying to design solutions and architecture that will be optimized 12 or even 6 months from now is a waste of time. Focus on what is needed right now, optimize for the current business process (meaning use the right tool for the task at hand) and stay flexible.

The most trivial example of over-sizing is buying servers that are too large and constantly underutilized. The most prominent example of over-engineering is using Kubernetes. It’s complicated, eats up a lot of resources (both server resources and talent), and is overkill for at least 90% of companies. Unless you’re Netflix, you’re probably better off with a cloud-native container service (Amazon Elastic Container Service or Azure Container Service).

Another activity in the right-sizing category is managing the auto-scaling rules for your cloud services and servers. Monitor your infrastructure continuously and keep the rules updated so they match the actual demand profile. Pay attention to overutilization limits and overage charges.
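As a minimal illustration of what such a rule looks like (the formula, target, and bounds here are invented for the sketch, not any provider’s actual API), a utilization-based scaling rule can be as simple as:

```python
# Illustrative autoscaling rule: size the replica count so that
# average CPU utilization lands near a target, clamped to bounds.

def desired_replicas(current, avg_cpu, target=0.6, min_n=1, max_n=10):
    """Return the replica count that brings average CPU near the target."""
    if avg_cpu == 0:
        return min_n  # nothing running hot; scale to the floor
    wanted = round(current * avg_cpu / target)
    return max(min_n, min(max_n, wanted))

# 4 replicas at 90% average CPU -> scale out to 6
# 4 replicas at 30% average CPU -> scale in to 2
```

Real autoscalers add cooldown periods and step policies on top of this, precisely to avoid the flapping that a naive rule like this would produce.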

Some tips in the “right tool for the task at hand” category include: 

  • Pick the right solution for storing data. Cold storage (like Amazon Glacier) should only be used as… cold storage, for backups: write once and (hopefully) never read. If you use it for regular writing and reading, it will get very expensive.
  • Use object storage (Blob or S3) where possible, for example for storing photos, videos, and other large files. It’s cheaper than block (disk) storage, which you still need for your server image.
  • Leverage a content delivery network (CDN, like CloudFront or Cloudflare) for serving your static websites. Static sites are then almost free to serve, because they come from the CDN and the local browser cache, so you don’t pay for bandwidth and your server doesn’t do any work either.

Optimize the code and architecture

Hardware engineers like to say that all the efficiency gains from Moore’s law get wasted by the software engineers… Optimized code and architecture will probably save you the most money in the long term. The difference in cloud operating costs between good and bad code/architecture can be huge: we’ve seen a 2x to 5x change in costs.

This is a subject worth a separate article, if not a book, so just very briefly, here are some simple, actionable suggestions. First, make sure the applications you develop manage memory efficiently, relying on garbage collection where available. Manage CPU processes on your servers and kill dead or idle processes; you’ll need less RAM and CPU to run your services, so they will cost less. Second, avoid deleting database entries where an update will do: deleting and rewriting takes more transactions than updating in place, so it will cost you more in I/O activity. Finally, if possible, move all your servers into the same zone. The traffic between them will be cheaper, hopefully free.
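To illustrate the database tip, here is a minimal soft-delete sketch using SQLite (the schema and table are made up for the demo): a single UPDATE marks a row inactive instead of a delete-and-rewrite cycle.

```python
import sqlite3

# Soft delete: flag rows as inactive with one UPDATE instead of
# DELETE + re-INSERT. Schema and data are invented for this demo.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, active INTEGER DEFAULT 1)"
)
conn.execute("INSERT INTO users (name) VALUES ('alice'), ('bob')")

# One in-place UPDATE "removes" bob without churning rows.
conn.execute("UPDATE users SET active = 0 WHERE name = 'bob'")

active = [r[0] for r in conn.execute("SELECT name FROM users WHERE active = 1")]
# active -> ['alice']
```

Queries then simply filter on `active = 1`; the row stays in place, which also preserves history and referential integrity as a side benefit.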

Stay flexible and automate

Automation is probably the second biggest cost saver. It mainly reduces the internal resources (engineers’ time) spent on infrastructure. The cost savings are not immediately visible in your cloud bills but are reflected in productivity gains (deployment speed, performance, reliability, etc.).

Automating your IT infrastructure management means introducing scripts, rule-based actions, and applications that assist the developers, DevOps, or system engineers with their tasks and preferably eliminate routine, tedious work. The big cloud providers offer good management automation tools under the umbrella of Infrastructure as Code (IaC). These tools, like Amazon’s CloudFormation, do a great job handling infrastructure provisioning and management. A piece of code (instead of a human operator) can launch a server with a specific service at the right moment, run it for as long as needed, and then kill it to conserve resources and save costs.

Containerization is another great way to increase flexibility and automation. Putting your software into containers means it can be launched on any infrastructure and will work the same. The same container can be moved around different servers, VMs, or clouds. Containers also scale quickly and can be orchestrated/managed automatically. All of this means more efficient use of resources and therefore lower costs. And remember, Kubernetes is a container orchestration solution, not a containerization technology. Docker (the most popular containerization technology) containers can be managed by many different orchestration solutions, including native cloud services, Kubernetes, and others (for example, Docker Swarm).

Once you have containers and IaC in place, you can work on “flattening the curve” of your cloud consumption by moving routine and predictable tasks around. Backups, migrations, reporting activities, or any processes you can control can be scheduled for when there’s usually a dip in demand for your services (late at night? over the weekend?), or at least not during the peak (don’t do anything extra on Black Friday if you’re running an e-commerce website…).
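As a toy sketch of this idea (the demand numbers are invented for illustration), you can pick the quietest hour from a daily demand profile and schedule heavy jobs there:

```python
# Pick the quietest hour from a 24-point demand profile
# (requests per hour, index = hour of day) to schedule backups in.
# The demand numbers below are made up for the example.

def quietest_hour(hourly_demand):
    """Return the hour of day with the lowest demand."""
    return min(range(len(hourly_demand)), key=lambda h: hourly_demand[h])

demand = [120, 80, 40, 30, 35, 60, 150, 400, 700, 900, 950, 980,
          1000, 990, 970, 940, 900, 850, 800, 700, 500, 350, 250, 180]
backup_hour = quietest_hour(demand)  # 3 a.m. in this profile
```

In practice you would feed this from your monitoring data and wire the result into a cron schedule or your IaC tooling, but the decision itself really is this simple.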

Finally, if possible, don’t use proprietary solutions. Instead of buying a branded database service from a big cloud provider, buy a vanilla server from them and launch an open-source database inside a container on this server. This will be cheaper and will give you much more flexibility and control over your infrastructure. You’ll be able to migrate between clouds easier and use multi-cloud infrastructure in a much more convenient way.

Part 2 — Practical tips

Here are some very specific, maybe obvious aspects of your cloud infrastructure that can be managed to help eliminate some of the IT costs immediately.

Pay attention to your bill and turn off what you don’t use

Go through your bill line by line and check whether you are using everything you pay for. It sounds obvious, but we still find a surprising number of unused VMs and services. Someone may have created a test environment 6 months ago that nobody touches anymore, or there’s an old static website still running on a server, things like that. So, make a backup of anything you might need in the future (like the test environment or the website) and kill the server/service; you can always restore it from the backup if needed. This also includes deleting unused IP addresses (some cloud providers charge you for addresses you reserve but don’t use).
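As a toy illustration of this audit (the CSV columns below are a made-up simplification, not any provider’s actual billing export format), you can flag line items that cost money but show no usage:

```python
import csv
import io

# Hypothetical simplified bill export: service name, monthly cost,
# and a usage signal. Real bills need real usage metrics joined in.
bill_csv = """service,monthly_cost,requests_last_30d
prod-api-server,420.00,1823344
old-test-env,310.00,0
static-site-vm,25.00,0
unattached-ip,3.60,0
"""

reader = csv.DictReader(io.StringIO(bill_csv))
# Flag anything that costs money but saw zero requests last month.
suspects = [row["service"] for row in reader
            if float(row["monthly_cost"]) > 0
            and int(row["requests_last_30d"]) == 0]
# suspects -> ['old-test-env', 'static-site-vm', 'unattached-ip']
```

Even this crude filter surfaces the usual culprits: forgotten test environments, idle VMs, and orphaned IPs.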

Keep in mind that some auxiliary resources are not deleted automatically when you delete the main service. If you delete a server, pay attention to, and manually get rid of, leftovers like disks, snapshots, static IPs, or a Windows Server license.

Consolidate services and use one larger server

One bigger server is cheaper than several smaller servers of the same total capacity. If you can, use one bigger server instead of a separate small one for each service. This works especially well for services that constantly underutilize their servers, like static websites: you can probably host dozens of static websites on one small server instead of a separate server for each.
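A quick back-of-the-envelope comparison makes the point. The prices below are entirely hypothetical (real price ladders vary by provider, and per-vCPU pricing is sometimes nearly linear, so always check your own price list):

```python
# Hypothetical USD/month prices for illustration only; real clouds
# price differently, and per-instance overhead is what you're avoiding.
price = {"1cpu_2gb": 12, "8cpu_16gb": 80}

eight_small = 8 * price["1cpu_2gb"]  # eight tiny servers: 96/month
one_big = price["8cpu_16gb"]         # one consolidated server: 80/month
savings = eight_small - one_big      # 16/month, before the ops savings
```

The bigger win is usually operational anyway: one server to patch, monitor, and back up instead of eight.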

Use reserved instances

On-demand pricing is great because you only pay for what you actually use. But chances are there is a baseline, a minimum consumption level, for your cloud infrastructure. It’s much cheaper to cover this baseload with reserved instances and pay on-demand rates only for the peaks. Additional savings are available if you pre-pay for the reserved capacity.
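Here is the arithmetic behind the concept, with invented rates and a tiny sample of hourly demand (real reserved-instance discounts vary by provider, term, and payment option):

```python
# Blended-cost sketch: reserved instances cover the baseline,
# on-demand covers the peaks above it. All rates are made up.

def blended_cost(hourly_usage, baseline, reserved_rate=0.06, on_demand_rate=0.10):
    """hourly_usage: instances needed each hour; baseline: reserved count.
    Reserved capacity is billed for every hour whether used or not."""
    reserved = baseline * reserved_rate * len(hourly_usage)
    peak_hours = sum(max(0, u - baseline) for u in hourly_usage)
    return reserved + peak_hours * on_demand_rate

usage = [4, 4, 5, 8, 10, 8, 5, 4]          # sample hourly demand
all_on_demand = sum(usage) * 0.10           # pure on-demand: 4.80
blended = blended_cost(usage, baseline=4)   # reserved base + peaks: 3.52
```

Note that reserved capacity bills around the clock, so over-reserving past your true baseline flips the savings into waste; size the baseline from real utilization data.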

Renegotiate prices

Every price is negotiable. The potential savings depend on your bargaining power, but I’m almost certain that every cloud provider will offer you some sort of discount when you threaten to leave. I was once offered a 50% discount on a $3 server when I told them I wanted to close my account! That was a small hosting company, but even the big cloud providers are more flexible now. A few years back AWS was a de facto monopoly, but right now Microsoft is breathing down their neck, Google is closing the distance, and there are a number of other formidable competitors with deep pockets (i.e., aggressive pricing), including Alibaba Cloud, Oracle, IBM, and Tencent Cloud.

What usually works best is calling the competing cloud providers (if you’re with AWS, call Azure and GCP) and asking them for a quote for your current setup. Chances are it will be notably lower (20%-25%). Then take this quote to your AWS reps and ask if they can match it. In most cases they will offer you a discount, and if you have the patience, you can iterate to get the best result (i.e., take the new, discounted AWS prices back to Azure, and so on).

I hope you’ll be able to use some of these suggestions and lower your cloud bills. Please let us know (directly or on our Twitter) if you have any questions or would like to know more about a specific cloud cost-related subject. 

I’d like to thank my friend and Djuno co-founder Moe Sayadi for sharing his knowledge and experience about cloud infrastructure for the purpose of this post.

Djuno develops AI that helps you take back control over cloud costs.

Djuno AI is a light touch tool that predicts server utilization, identifies seasonality, and provides cost-saving tips and recommendations. It’s free and doesn’t require any registration.

You can check it out here: http://ai.

Sign up for updates!


Optimizing Cloud Costs in a Fast Growing Startup — Djuno Case Study

She’s just discovered how much money her startup can save with Djuno 😉
Photo by Christina @ on Unsplash 

Djuno helps companies get back control over cloud costs. In the process, we use both AI (Artificial Intelligence) and our natural intelligence.

One of the most common questions people ask is a variation of “what exactly are the cost savings?” or “what are the average cost savings over a 5-year period?”. It is one of those “how long is a piece of string” questions. It’s complex and depends on many factors, but we have a framework that can be used to estimate the savings, and we can provide some actual numbers for reference.

External cloud costs

Let’s start with real numbers. One of our clients, a fast-growing, successful FinTech/InsurTech startup, used to pay low 5 figures per month (low 6 figures per year) out of pocket for external public cloud infrastructure. After our intervention, they are now paying around 30% less each month. But we didn’t just turn off their servers: we redesigned the architecture, so they now have 2x more cloud compute and storage. On a “per unit” basis, the savings come to some 65%. Over 5 years, the total cost savings amount to a very significant, high-6-figure number, and this will become even more evident as the business grows and needs more cloud resources.
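The per-unit arithmetic, for anyone who wants to check it (costs normalized to 100 for readability; the 30% and 2x figures come from the case above):

```python
# Reproducing the per-unit savings arithmetic from the case study.
old_cost, new_cost = 100.0, 70.0        # normalized: the bill dropped ~30%
old_capacity, new_capacity = 1.0, 2.0   # compute/storage roughly doubled

bill_savings = 1 - new_cost / old_cost  # 0.30: the headline number
per_unit_savings = 1 - (new_cost / new_capacity) / (old_cost / old_capacity)
# 0.65: each unit of capacity now costs 35% of what it used to
```

This is why per-unit savings matter more than the headline bill: as the business grows into its doubled capacity, the 65% figure is what compounds.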

In general, our conservative estimate is that companies can save around 30% of their cloud expenses. This is in line with recent research that confirms that about 30% of cloud spending is wasted [2020 Flexera state of the cloud report].

Internal resources

Additionally, companies can reduce the internal resources dedicated to managing infrastructure. A full-time DevOps engineer costs $100k per year and more. With Djuno, engineers can focus on developing instead of maintaining and patching the existing infrastructure. Djuno managed solutions (with our dedicated engineer) cost about 30% of this ($30k/year vs $100k) if a company wants to outsource the whole process. The bigger the infrastructure and the more DevOps or SysOps engineers a company has, the more evident the savings become.

Opportunity costs and other business related items

Finally, there are costs that are difficult to quantify immediately and that rest on assumptions, but that can be estimated case by case. These include the cost of technical debt (when a company doesn’t modernize because it has no good migration path), the costs of (non-)compliance (especially in financial services), the opportunity cost of not utilizing all the data (due to the lack of tools to share it internally and externally), and the cost of reporting and analytics when aggregation has to happen manually each time.

The above items have a profound impact on the whole business, especially when compounded over time. In general, the question is: how much does it cost to run a business that is inflexible and whose culture and processes are negatively impacted by obsolete IT architecture? With Djuno, the company’s IT environment becomes an asset, and every piece of code is a reusable building block that can generate compound yield in the future (vs. the liability of technical debt and the escalating costs of disposable code and inefficient infrastructure).

In summary: the cost savings are significant and the positive impact on the entire business is profound. You should try it! [BTW, we have a zero-risk, no-commitment “return policy”: we deploy Djuno for you, estimate the impact, present you with an offer, and if you don’t like it, we take Djuno back, no questions asked.]

Sign up for updates!
