Self-Aware Blockchain

Photo by Icons8 Team on Unsplash

Ultron?

Imagine life in 5, maybe 10 years (and not everywhere, because the future is not distributed equally). For some people the day will look like this: they wake up and are greeted by their AI concierge. It may live on their phone or their voice device; it may have the physical form of a friendly robot or of a very attractive human; it may even interact through a neural link connected directly to their brain.

The AI assistant/concierge will manage, assist, navigate, and take care of every possible aspect of their life. Think of Tony Stark's Jarvis from the Iron Man or Avengers movies. It will prepare an optimal meal plan based on their current and desired health, body weight, or specific preferences (plant-based, paleo, gluten-free, etc.). It will then order the food items from the best possible place, have them delivered, and even book suitable restaurants, update the calendar, and push travel plans to the navigation apps in cars, e-bikes, etc. Meal plans will be negotiated with other AIs that manage the lives of friends, family, and the restaurants.

All payments, data transfers, and signing up, in, and out will happen in the background, completely abstracted away from the users. There will be no payment cards, no front-end email apps, no social media apps. Only APIs talking to AIs.

This will happen in business as well. AIs will find the best employees after negotiating with their AIs, checking verified education and employment records, and the trusted, persistent reputations of the employees, subcontractors, etc. Supply chains and production facilities will be AI-run. Self-flying cars will be designed and operated by AIs. You get the picture. (On another note, many current jobs will become completely redundant and obsolete, and it's interesting to think about what new jobs will emerge. And will they?)

On the back end, only two things are needed to make this possible: 1) an enormous amount of computational resources everywhere, and 2) an enormous amount of high-quality data. This can be achieved by the likes of Google, Facebook, and Amazon in the US and parts of Western Europe, working hand in hand with the NSA, and by their Chinese counterparts everywhere else in the world. This vision of the future is truly disturbing: the perfect slavery-with-extra-steps system, controlled by the 1% of the 1%, behind a polished, slick, Disneyland-like façade pushing consumerism even further.

or Vision?

The alternative can emerge from the intersection of technologies, concepts, and trends like open source, peer-to-peer, decentralized identity, community ownership, and blockchain, leading to the creation of one or more DAOs (Decentralized Autonomous Organizations).

The main purposes of such a DAO will be to:

  • maintain open standards for decentralized identities, identity resolution, data validation, and sharing
  • coordinate an open, decentralized cloud infrastructure that anyone can use to manage their identities and data
  • develop open-source tools to govern, improve, and further develop the two items above.

End users will be able to control their digital identities and data. For example, they will be able to access services and products without creating yet another login and password; they will simply sign in using their digital wallet. They will also be able to choose how their data is hosted (locally, in a public cloud like AWS, in a p2p torrent-like network, or in a decentralized cloud) and who has access to it. In practical terms, this means switching costs drop to zero and there's no vendor lock-in anymore. Interoperability also comes out of the box (imagine LinkedIn users messaging Instagram users).

Looks like this can solve the problem of getting enormous amounts of high-quality data. What about compute? This is where the “self-aware blockchain” comes in. Currently, blockchain networks consist of homogeneous nodes performing homogeneous tasks: basically, all the nodes do the same simple task at the same time, i.e., run a piece of code (a “smart contract”) and update the transaction ledger. The network cannot route different tasks to different nodes or divide tasks, and it has no information about the state of the nodes themselves. Now imagine a network that can autonomously manage its state based on the current and predicted demand for cloud services and on the current and predicted state of each node in the network. The “current and predicted state” can include the technical aspects of the node, but also regulatory, compliance, and risk-management aspects, not to mention the economics of each node and the network as a whole (pricing, exchange rates, energy prices, etc.).

This would create a decentralized, self-optimizing, and self-managing cloud where each task would be processed in the way most efficient for that specific task. Cloud users would interact with a completely fluid, on-demand layer of services, fully abstracted from the physical networks, servers, and storage. On the supply side, cloud services would be offered by a decentralized ecosystem of independent providers (professional data centers, cloud providers, and also individuals and companies offering their spare capacity). The self-aware blockchain would then take care of allocating workloads to the providers, billing, payments, and so on.

So, when we say “self-aware blockchain”, we actually mean a decentralized autonomous organization that is aware of its network and all its nodes in terms of their technical, legal, and financial state, and that uses AI and tokenized incentives to maximize the outcomes for all participants. But “self-aware blockchain” is a bit less wordy…

A DAO that operates a decentralized, self-managed cloud infrastructure and open-source standards for self-sovereign identity and data, and that runs on a “self-aware” blockchain, can be a viable infrastructure for a hopefully slightly brighter, less dystopian future.

Djuno

Our core values are autonomy, openness, and competence. We're on a mission to make our clients feel in control, confident, and secure. This is why the only long-term vision we can subscribe to is open source and transparent: a vision with a Self-Aware Blockchain. Stay tuned for more updates on how this vision takes shape on Twitter or LinkedIn.

Practical tips that can help you control cloud costs

What would you do with all the money saved on cloud bills? [Photo by Tech Nick on Unsplash]

30% or more

You probably significantly overpay (by 30% or more) for cloud infrastructure and don’t even know it. Cloud services are complex and cloud billing is complicated and opaque. It may seem like cloud providers have you over a barrel (which is only partially true). Finally, cloud consumption and billing are difficult to predict. All of this means that cloud costs can be hard to manage or even to accurately identify. This is a big enough problem that there are consulting companies specializing in just minimizing cloud costs for clients.  

Executives estimate that at least 30 percent of their cloud spending is wasted. 
— Forbes / Flexera State of the Cloud Report 2020

I'd like to share a few practical, actionable tips and suggestions that can help you lower your cloud infrastructure costs. Most can be implemented in-house, by yourself, without the need to hire external help. If you have any questions, if anything needs further clarification, or if I missed something, please don't hesitate to comment or contact me on Twitter or LinkedIn.

First, I'll cover some general principles, concepts, and cultural practices related to managing IT infrastructure and development that result in lower cloud bills. In Part 2, I'll list some specific, practical actions that can save you a few dollars here and there.

Part 1 — General principles

Your cloud provider is your friend

Your cloud provider wants you to succeed. For them, successful clients mean long-term business, so they will want to help you however they can. So don't be shy: talk to your cloud account manager. Tell them about your project and your needs; they are in a very good position to recommend the best technology. They are also the people who can explain in detail how the billing works, what you should expect to pay, and what each line item means. Moreover, they can point you to plenty of resources about the technology you are using, so you get the most mileage out of every dollar you spend with them. Sure, there's a risk they will try to upsell or cross-sell you more services, but educating yourself about the billing, the terms and conditions, the technology, and its applications reduces information asymmetry and gives you more bargaining power.

Grow organically / Right-size

The only way to future-proof your IT infrastructure is to stay flexible (more on that later). Over-engineering, over-sizing, and "pre-optimizing" will only cost you more money and will never work, for two simple reasons: 1) business requirements change very quickly, and 2) technology changes even faster. Trying to design solutions and architecture that will be optimal 12 or even 6 months from now is a waste of time. Focus on what is needed right now, optimize for the current business process (meaning: use the right tool for the task at hand), and stay flexible.

The most trivial example of over-sizing is buying servers that are too large and constantly underutilized. The most prominent example of over-engineering is using Kubernetes. It's complicated, eats up a lot of resources (both server resources and talent), and is overkill for at least 90% of companies. Unless you're Netflix, you're probably better off with a cloud-native container service (like Amazon Elastic Container Service or Azure Container Service).

Another activity in the right-sizing category is managing the auto-scaling rules for your cloud services and servers. Monitor your cloud infrastructure continuously and keep the rules updated, so they match the actual demand profile. Pay attention to overutilization limits and overage charges.
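
For illustration, here's a minimal AWS CLI sketch of keeping scaling bounds and rules in line with demand (the group name, capacity numbers, and CPU target are hypothetical; other clouds have equivalents):

$ aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name my-web-asg \
    --min-size 1 --desired-capacity 2 --max-size 4

# target-tracking policy: keep average CPU around 50%
$ aws autoscaling put-scaling-policy \
    --auto-scaling-group-name my-web-asg \
    --policy-name keep-cpu-at-50 \
    --policy-type TargetTrackingScaling \
    --target-tracking-configuration '{
      "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
      "TargetValue": 50.0
    }'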

Some tips in the “right tool for the task at hand” category (with a short sketch after the list):

  • Pick the right solution for storing data. Cold storage (like Amazon Glacier) should only be used as… cold storage, for backups: write once and (hopefully) never read. If you use it for frequent writing and reading, it will get very expensive.
  • Use object storage (Blob or S3) where possible, for example for storing photos, videos, and other large files. It's cheaper than block (disk) storage, which you still need for your server image.
  • Leverage a content delivery network (CDN, like CloudFront or Cloudflare) for serving static websites. They become almost free to serve, because content comes from the CDN and the local browser cache, so you don't pay for bandwidth and your server doesn't do any work either.
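
To make the object storage and CDN tips concrete, here's a minimal sketch, assuming the AWS CLI and a hypothetical bucket name (the bucket still needs a public-read policy, and a CDN like CloudFront goes in front for caching):

# enable static website hosting on the bucket
$ aws s3 website s3://my-static-site --index-document index.html

# sync a new build of the site; --delete removes stale files
$ aws s3 sync ./public s3://my-static-site --delete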

Optimize the code and architecture

Hardware engineers like to say that all the efficiency gains from Moore's law will be wasted by the software engineers… Optimized code and architecture will probably save you the most money in the long term. The difference in cloud operating costs between good and bad code/architecture can be huge: we've seen 2x to 5x changes in costs.

This is a subject worth a separate article, if not a book, so very briefly, here are some simple, actionable suggestions. First, make sure the software you develop manages memory efficiently (use garbage collection where your language doesn't provide it automatically), and manage CPU processes on the servers: kill dead or idle processes. You'll need less RAM and CPU to run your services, so they will cost less. Second, avoid deleting and re-creating database entries; a delete plus a rewrite takes more transactions than a single update, so it will cost you more in I/O activity. Finally, if possible, move all your servers into the same zone; the traffic between them will be cheaper, hopefully free.
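
As a rough sketch of the process-hygiene point (Linux; the service name is hypothetical), the idea is simply to look at what's actually consuming resources and stop what nobody needs:

# top 10 memory consumers on the server
$ ps aux --sort=-%mem | head -n 10

# stop and disable a leftover service nobody uses anymore
$ systemctl stop old-test-service && systemctl disable old-test-service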

Stay flexible and automate

Automation is probably the second biggest cost saver. It mainly reduces the internal resources (engineers' time) spent on infrastructure. The cost savings are not immediately visible in your cloud bills but show up as productivity gains (deployment speed, performance, reliability, etc.).

Automating your IT infrastructure management means introducing scripts, rule-based actions, and applications that assist the developers, DevOps, or system engineers with their tasks and, preferably, eliminate routine, tedious work. The big cloud providers offer good management automation tools under the umbrella of Infrastructure as Code (IaC). These tools, like Amazon's CloudFormation, do a great job handling infrastructure provisioning and management. A piece of code (instead of a human operator) can launch a server with a specific service at the right moment, run it for as long as it's needed, and then kill it to conserve resources and save costs.
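
A minimal sketch of that lifecycle with the AWS CLI (the template and stack names are hypothetical):

# stand up a whole environment from a template kept in version control
$ aws cloudformation deploy \
    --template-file environment.yml \
    --stack-name test-env

# ...and tear it down when it's no longer needed, so it stops billing
$ aws cloudformation delete-stack --stack-name test-env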

Containerization is another great way to increase flexibility and automation. Putting your software into containers means it can be launched on any infrastructure and will work the same. The same container can be moved between servers, VMs, or clouds. Containers also scale quickly and can be orchestrated and managed automatically. All of this means more efficient use of resources, which means lower costs. And remember: Kubernetes is one of several container orchestration solutions, not a containerization technology. Docker (the most popular containerization technology) containers can be managed by many different orchestration solutions, including native cloud services, Kubernetes, and others (for example, Docker Swarm).
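
A minimal sketch (the image name is hypothetical); the exact same image runs on a laptop, a VM, or any cloud that can run Docker:

# build the image once...
$ docker build -t myapp:1.0 .

# ...and run it anywhere, mapping container port 8080 to the host
$ docker run -d -p 8080:8080 myapp:1.0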

Once you have containers and IaC in place, you can work on "flattening the curve" of your cloud consumption by moving routine, predictable tasks around. Backups, migrations, reporting, or any processes you control can be scheduled for when there's usually a dip in demand for your services (late at night? over the weekend?), or at least not during the peak (don't do anything extra on Black Friday if you're running an e-commerce website…).
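
For example, a hypothetical crontab entry (edit with crontab -e; the script path is a placeholder) that pushes backups into the Sunday-night demand dip:

# min hour day month weekday: run backups at 03:30 on Sundays
30 3 * * 0  /usr/local/bin/backup.sh >> /var/log/backup.log 2>&1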

Finally, if possible, avoid proprietary solutions. Instead of buying a branded database service from a big cloud provider, buy a vanilla server from them and launch an open-source database inside a container on that server. This will be cheaper and will give you much more flexibility and control over your infrastructure. You'll be able to migrate between clouds more easily and run a multi-cloud setup in a much more convenient way.
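
A minimal sketch of the idea, using the official PostgreSQL image (the names and password are placeholders; the named volume keeps the data outside the container, which makes it easy to move):

$ docker run -d --name db \
    -e POSTGRES_PASSWORD=change-me \
    -v pgdata:/var/lib/postgresql/data \
    -p 5432:5432 \
    postgres:12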

Part 2 — Practical tips

Here are some very specific, maybe obvious, aspects of your cloud infrastructure that you can manage to eliminate some IT costs immediately.

Pay attention to your bill and turn off what you don’t use

Go through your bill line by line and check whether you are using everything you pay for. It sounds obvious, but we still find a surprising number of VMs and services that no one is using. Someone may have created a test environment 6 months ago that's now abandoned, an old static website may still be running on its own server, things like that. So, make a backup of anything you might need in the future (like the test environment or the website) and kill the server/service; you can always restore it from the backup if needed. This also includes deleting unused IP addresses (some cloud providers charge you for addresses you reserve but don't use).

Keep in mind that some auxiliary resources are not deleted automatically when you delete the main service. If you delete a server, pay attention to and manually get rid of things like its disk, snapshots, static IP, or a Windows Server license.
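
A hedged AWS CLI sketch of hunting for two common leftovers, unattached disks and unassociated Elastic IPs (adapt the queries to your own provider and setup):

# EBS volumes not attached to any instance
$ aws ec2 describe-volumes \
    --filters Name=status,Values=available \
    --query 'Volumes[].{ID:VolumeId,SizeGiB:Size}'

# Elastic IPs reserved but not associated with anything
$ aws ec2 describe-addresses \
    --query 'Addresses[?AssociationId==null].PublicIp'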

Consolidate services and use one larger server

One bigger server is cheaper than the sum of smaller servers of the same total size. If you can, use one bigger server instead of a separate small one for each service. This works especially well for services that constantly underutilize their servers, like static websites: you can probably run tens of static websites on one small server instead of a separate server for each.
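
One minimal way to sketch this (the paths are hypothetical): a single nginx container serving several static sites by URL path from one small server:

$ docker run -d --name sites -p 80:80 \
    -v /srv/www:/usr/share/nginx/html:ro \
    nginx:alpine

# /srv/www/site-a/index.html  ->  http://yourhost/site-a/
# /srv/www/site-b/index.html  ->  http://yourhost/site-b/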

Use reserved instances

On-demand pricing is great because you only pay for what you actually use. But chances are there is a baseline, a minimum consumption level, for your cloud infrastructure. It's much cheaper to cover this baseload with reserved instances and pay on-demand rates only for the peaks. A simple illustration: if you always run at least 8 servers, and demand spikes add a few more for only a couple of hours a day, reserve the 8 and cover the spikes on demand (reserved instances are typically tens of percent cheaper than on-demand, depending on the commitment term). Additional savings can be generated if you pre-pay for the reserved capacity.

Renegotiate prices

Every price is negotiable. The potential savings depend on your bargaining power, but I'm almost certain that every cloud provider will offer you some sort of discount when you threaten to leave. I was once offered a 50% discount on a $3 server when I told them I wanted to close my account! It was a small hosting company, but even the big cloud providers are more flexible now. A few years back, AWS was a de facto monopoly, but right now Microsoft is breathing down their neck, Google is closing the distance, and there are a number of other formidable competitors with deep pockets (= aggressive pricing), including Alibaba Cloud, Oracle, IBM, and Tencent Cloud.

What usually works best is calling the competing cloud providers (if you're with AWS, call Azure and GCP) and asking for a quote for your current setup. Chances are it will be notably lower (20%-25%). Then take this quote to your AWS account manager and ask if they can match it. In most cases they will offer you a discount, and if you have the patience, you can iterate for the best result (i.e., take the new, discounted AWS prices back to Azure, and so on).


I hope you’ll be able to use some of these suggestions and lower your cloud bills. Please let us know (directly or on our Twitter) if you have any questions or would like to know more about a specific cloud cost-related subject. 

I’d like to thank my friend and Djuno co-founder Moe Sayadi for sharing his knowledge and experience about cloud infrastructure for the purpose of this post.


Djuno develops AI that helps you take back control over cloud costs.

Djuno AI is a light-touch tool that predicts server utilization, identifies seasonality, and provides cost-saving tips and recommendations. It's free and doesn't require any registration.

You can check it out here: https://ai.djuno.io/


How Omni Cloud can prevent public cloud providers from destroying your assets

The impossible trilemma of IT infrastructure — © Djuno.io

Anti-capitalist public clouds

Cloud infrastructure is wonderful. It was the main facilitator of "software eating the world", and the benefits of this process are enormous: easier access to powerful infrastructure, more innovation, and new products and businesses that would have been impossible without clouds (some detect cancer and save lives, others let you fly over a virtual representation of your own house). So, from a technology point of view, clouds are wonderful.

From an individual business point of view, they are wonderful, until they aren’t… Let me explain.

The objective of every capitalist is to create capital ("human-created assets that can enhance one's power to perform economically useful work"). Businesses sell products and earn margins over their fixed and variable costs of creating those products. Every business wants variable costs to be as low as possible, because then profits grow disproportionately as sales grow. Over their lifecycle, businesses usually start with high variable costs and work to minimize them by investing in their own productive assets (capital) that can be reused over and over again and yield compound returns. This is how capital is created. A very simple analogy is building your own house instead of renting.

Cloud providers offer companies variable infrastructure costs, which is very attractive in the beginning: low usage of the software means low infrastructure costs. But when the business scales and serves a lot of users, the variable costs grow as fast as (and sometimes faster than) the utilization of the cloud infrastructure. Instead of creating capital, companies pay more and more in rent, like a restaurant owner who can never get ahead of the game: the more successful the restaurant, the faster the landlord raises the rent.

Cloud providers sell the story of "Focus on creating value for the users! Don't worry about the infrastructure cost." And it works great, for them. In Q2 2020 alone, AWS generated $3.36B of operating profit on $10.81B of revenue. Not gross margin, but profit after the segment's costs, including R&D, marketing, and depreciation! And they are in a capital-intensive, commodity business, where value-added services are provided using open-source software that anyone can install for free. How is this possible?

Power #4

Infrastructure migration is a very complicated process, which means it's expensive: there are direct costs of migration, costs of the new infrastructure, costs of redundancies, operational risk, training, and maybe recruiting new talent or hiring outside help.

Unsurprisingly, switching costs occupy a prominent place, as power #4, in 7 Powers: The Foundations of Business Strategy by Hamilton Helmer. Cloud providers know this power very well and take full advantage of it.

First, they lure you in. Like a seasoned drug dealer in front of a high school, some will offer the first batch for free: you'll get a hefty discount for the first 6-12 months, and startups will receive thousands of dollars' worth of credits. Then they hook the developers by offering shiny new toys: new tools, add-ons, dashboards. These toys are easy to deploy and use, and, just like with every new toy, the kid doesn't have to pay for it. The parents pay the bill! Or, in this case, whoever is responsible for the IT budget. The "new toys" offered by different cloud companies are very similar; they are usually forks of the same open-source tools and products. But they are different enough that they are not fully compatible, adding a lot of friction to any potential migration. Now it's not only switching from one cloud to another, but also migrating your data, launching comparable services on a different platform, and training or hiring new developers. Not to mention the siloed, redundant user identity and access management systems.

The latest iteration of this approach is serverless architecture, or Function as a Service. Again, from a technical point of view it's a very interesting innovation. As a business decision, it's a highly questionable proposition: "Please develop software for us, for free, so that we can rent it back to you and charge whatever we want, since migration is almost impossible."

Impossible Trilemma

All of this creates "the impossible cloud trilemma": at any given point in time you can have at most two of the following three: a) flexibility and control, b) convenience, c) low costs. If you do everything in-house, you get flexibility, control, and a low total cost of ownership, but no convenience. If you pick a small local provider, you'll probably get low costs. If you pick a trendy PaaS, you'll only get convenience. Optimizing for flexibility and convenience requires something like GCP + Anthos, which is expensive and still doesn't give you full control.

These are only the problems at the operational level, related to cost structure and the long-term efficiency of capital allocation. But there are also important strategic issues. Since software has eaten the world, every company is a software company now, and core competitive advantages and processes manifest themselves as software: data acquisition, storage, and processing. Outsourcing these key functions to a very powerful, monopolistic provider skews the balance of power and strategic positioning. Companies risk becoming "Uber drivers" while the major cloud platforms become "the Uber platform".

Finally, there is the open question of what happens to the data stored in a public cloud. How can companies know who has access to it, and for what purpose? Obviously, high-value, high-profile (and government) cloud contracts are audited. But what about all the small businesses and startups? How many of them read the T&Cs before signing up with a PaaS? (Amazon is already using data from its own third-party merchants to design in-house products that compete with them in the Amazon store.)

Omni Cloud

So, is there a solution? And I mean a realistic, technology- and market-driven solution, not a political, Elizabeth Warren-style one. Here's one interesting idea presented recently by David Linthicum, Chief Cloud Strategy Officer at Deloitte Consulting:

“In 2020, I believe we’ll see the rise of the “Omni Cloud,” or what multi-cloud will become. Basically, the abstraction above the physical public clouds, providing common ways to access storage, processing, databases, compute, and HPC. This will likely be more of an idea than an actual thing in 2020, but it will be game changing in terms of how we deal with complex heterogenous cloud deployments.”

By abstracting the infrastructure, such solutions reduce cloud providers to the role of commodity suppliers, as long as the Omni Cloud environment enables easy migration with near-zero costs and downtime. Companies could create a portable IT environment that becomes their asset, yields compounded returns, and can be moved anywhere, depending on the current best offers on a transparent cloud/hosting market. Companies could have convenience, flexibility, control, and low costs at the same time.

With Omni Cloud, the company’s IT environment becomes an asset and every piece of code is a reusable building block that can generate compound yield in the future (vs. a liability of technical debt and escalating costs of disposable code and inefficient infrastructure).

Let us know @realDjuno what you think.


Optimizing Cloud Costs in a Fast Growing Startup — Djuno Case Study

She’s just discovered how much money her startup can save with Djuno 😉
Photo by Christina @ wocintechchat.com on Unsplash 

Djuno helps companies get back control over cloud costs. In the process, we use both AI (Artificial Intelligence) and our natural intelligence.

One of the most common questions people ask is some variation of "what exactly are the cost savings?" or "what are the average cost savings over a 5-year period?". It is one of those "how long is a piece of string" questions: it's complex and depends on many factors. But we have a framework that can be used to estimate the savings, and we can provide some actual numbers for reference.

External cloud costs

Let's start with real numbers. One of our clients, a fast-growing, successful FinTech/InsurTech startup, used to pay low 5 figures per month (low 6 figures per year) out of pocket for external public cloud infrastructure. After our intervention, they now pay around 30% less each month. But we didn't just turn off their servers: we redesigned the architecture, so they now have 2x more cloud compute and storage. On a "per unit" basis, that makes the savings roughly 65% (70% of the old bill for twice the capacity means a unit cost of 0.7 / 2 = 0.35 of the original). Over 5 years, the total cost savings amount to a very significant, high-6-figure number, and this will become even more evident as the business grows and more cloud resources are needed.

In general, our conservative estimate is that companies can save around 30% of their cloud spend. This is in line with recent research confirming that about 30% of cloud spending is wasted (Flexera 2020 State of the Cloud Report).

Internal resources

Additionally, companies can reduce the internal resources dedicated to managing infrastructure. A full-time DevOps engineer costs $100k a year or more. With Djuno, they can focus on developing instead of maintaining and patching the existing infrastructure. Djuno's managed solution (with our dedicated engineer) costs about 30% of that ($30k/year vs. $100k) if a company wants to outsource the whole process. The bigger the infrastructure, and the more DevOps or SysOps engineers a company has, the more evident the savings.

Opportunity costs and other business related items

Finally, there are costs that are difficult to quantify immediately and rest on assumptions, but can be estimated on a case-by-case basis. These include the cost of technical debt (when a company doesn't modernize because it has no good migration path), costs of (non-)compliance (especially in financial services), the opportunity cost of underutilized data (due to the lack of tools to share it internally and externally), and the cost of reporting and analytics where aggregation has to happen manually each time.

The above items have a profound impact on the whole business, especially compounded over time. In general, the question is: how much does it cost to run a business that is not flexible and whose culture and processes are negatively impacted by obsolete IT architecture? With Djuno, the company's IT environment becomes an asset, and every piece of code is a reusable building block that can generate compound yield in the future (vs. a liability of technical debt and the escalating costs of disposable code and inefficient infrastructure).

In summary: the cost savings are significant and the positive impact on the entire business is profound. You should try it! (BTW, we have a zero-risk, no-commitment "return policy": we deploy Djuno for you, estimate the impact, and present you with an offer; if you don't like it, we take Djuno back, no questions asked.)


The most important aspect of healthy company culture and how to create it

If you could just do this culture…

Psychological safety

No mystery or surprise here, no dramatic reveal: it has been well documented (for example by Project Aristotle, in which Google researchers studied 180 teams to find out the components of highly effective teams) that the single most important aspect of a healthy, successful organization (a startup, a company, a team) is psychological safety. Having a culture that encourages psychological safety means that all team members know it's OK to take risks and to be vulnerable in front of each other. "If I make a mistake on our team, it is not held against me."

For the record, the four other aspects of successful teams that showed up in the Google research are:

  • Dependability — “When my teammates say they’ll do something, they follow through with it.”
  • Structure and Clarity — “Our team has an effective decision-making process.”
  • Meaning — “The work I do for our team is meaningful to me.”
  • Impact — “I understand how our team’s work contributes to the organization’s goals.”

BUT without psychological safety, the other four don't work, and with psychological safety, they usually take care of themselves.

Now, let's not forget to define organizational/company culture itself. My favourite definition, and the most accurate in my experience, is this: culture is what people do when no one is watching or telling them what to do. For example, if a sales team gets an email, do they reply right away? Will they call a potential client, forward the message to someone else, or just do nothing for a while?

Usually, culture emerges organically or is impressed upon the company by the founder(s), and it is managed, shaped, or dealt with only when it becomes harmful or detrimental to the company (think Uber). But for us, as the founders of Djuno, setting a healthy culture was an imperative, conscious choice from the very beginning. We've worked together before and we've experienced the benefits of psychological safety inside our team. As we're bootstrapping Djuno into existence, there is no budget for culture: no offsites, no trainers, no posters, no branded t-shirts or merch. We're also all remote and spread across 3 continents.

Here’s what we did:

  1. We put together a "Rules of Engagement" document that defines our values and our recommended and required behaviours and attitudes.
  2. We made sure everyone in the company has access to the document (it's on our Confluence) and communicated that it's a living document that should evolve as we evolve as a team (it's in a permanent "live beta", or "v0.9", state).
  3. Finally, and most importantly, we make sure every day that the founders adhere to these principles. People learn culture by observing how others behave, not by reading documents (especially when the example comes from the top).

That's it! We're still figuring things out, learning to communicate and work with each other as our team grows. But the results have been encouraging so far, in terms of both personal satisfaction and tangible deliverables (development time and velocity).

Here’s a copy of our “Rules of Engagement”. Maybe it will help or inspire you.

Djuno Rules of Engagement
V 0.9 (live Beta)

Intro

This is not our first rodeo. We’ve worked for many companies. We created our own companies. We had a lot of success and a lot of failures. We did a lot of things right and made a lot of mistakes.

One thing we’ve learned and all agree upon is that great companies don’t happen by accident, but are stubbornly created by great people. By this standard (the people), Djuno is already great company material and we want it to stay like this.

So, since we are a distributed, remote bunch, here are some ground rules for all of us to operate under: our constitution, or "rules of engagement". Everyone is encouraged to get to know this document, as it will make all of our lives easier.

This document will always be in “live Beta”, because we understand it will evolve with time. In a truly agile spirit, we think it’s better to release something useful, even if imperfect, as soon as possible, and improve as we go along.

OK, here are our rules / guidelines:

No slaves, all volunteers

We work on creating Djuno because we want to, not because we have to. We want to work with people who share this attitude, who care about the project or their team members or (ideally) both. If this is just a 9–5 job that you hate, please look for your passion elsewhere.

Equal opportunity for all

Life is not fair. People are different, have different talents and backgrounds. We are not pretending everyone is the same, but we make sure everyone has the same access to opportunities at Djuno. We don’t care about your nationality, religion, gender, age, formal education or anything else like that. We only care if you can create value. If you can, the sky is the limit. You’ll be given as much responsibility as you can handle and you’ll participate in the success we create (=money, in plain English…) proportionally to your contribution.

Also, to say it once, and get it out of the way: No gender, cultural, national, racial etc. biases and discrimination will be tolerated. Even for people who are brilliant, talented or in charge. We don’t work with assholes.

We’re a pack and we eat what we kill

This means we’re a startup, not a large enterprise or a government agency. We can only spend the money we earn. While not everybody hunts, we do not tolerate dead weight.

Anyone who wants to make more money should focus on how to create more value: bring more clients, eliminate inefficiencies, create better products, etc. As stated above, all valuable contributions will be noted and rewarded.
This also means we encourage people to make problems disappear: try solving a problem yourself before escalating it.

Communication is encouraged, noise isn’t

We prefer direct lines of communication. If there is something important you want to talk about with the CEO or anyone else, go for it! No need to ask anyone's permission. However, the key words are "something important". We don't want to create noise or unnecessary emails and messages.

Everyone can make mistakes, but don’t make the same mistake twice

We all make mistakes, but it's important to learn from them. No one will get punished for making an honest mistake. However: 1) as soon as you discover a mistake, communicate it so we can think about how to fix it. Hiding or covering up your errors will not be tolerated. It's bad to hear from your team that they screwed something up, but it's infinitely worse to hear it from your client… 2) don't make the same mistake twice. Every mistake needs to be a learning opportunity. Repeating the same mistakes is either dumb or malicious, and both are a problem.

Pride of ownership

We’re like the early pioneers on the western frontier. The opportunities are endless, but only if you want to stake your claim. This means you take ownership of your tasks and projects. Be proud of them, be proud of what you’re creating. If there are problems — solve them yourself if possible. This also means you won’t be controlled or closely supervised. (There are no factory workers and supervisors on the western frontier.) But pioneers are expected to be ethical and honest. If they break the rules or cheat, the sheriff will deal with them.

Finally, claiming your stake means being proactive. Don't wait for others to tell you what to do. If you see a task or an issue that needs solving, take it and run with it! Even if this means you'll make a mistake or two. As mentioned above, that's no big deal as long as you fix your own mess and learn from the mistakes 😉

Be nice, expect nice

We prefer to have an efficient culture, where people speak their minds and can be direct. But working remotely has its challenges. One of them is that people don't interact face-to-face: they don't see the body language or hear the tone of voice, so something meant as a joke can be read as something mean. So: 1) be nice, and be extra mindful of language and cultural differences; 2) expect nice: don't take everything personally, and give others the benefit of the doubt.

Opinions are not facts

Every opinion is equally valid and therefore equally useless. Facts, on the other hand, can be valid or invalid. Valid facts are useful. Valid facts always win arguments against opinions. The person who is right (has the most valid facts) wins the argument, even if this person is not in charge (not the COO, for example). You are entitled to your opinion, but don't get offended or surprised when it loses an argument against a valid fact.

It is not our job to make anyone happy

Please don't think that we don't care about your happiness or, even worse, that we want to make you unhappy ;-). Not at all! We just recognize that whenever a group of people interacts or has to agree on a goal or course of action, someone will be unhappy. And in business, we choose to focus on creating success and making money, not on coddling everyone and making sure they are happy all the time. Sometimes directness and efficiency are more important than small talk.

Think Big

“Thinking small is a self-fulfilling prophecy. Leaders create and communicate a bold direction that inspires results. They think differently and look around corners for ways to serve customers.” This is a quote from Amazon's leadership principles. We totally agree, and just want to add that while thinking big, great leaders get shit done on a daily basis.

The End.

Let us know @realDjuno what you think.


Dockerize R, expose as API

by Moe, CEO

A few months ago, there was a requirement to use some R code in production. Porting it to another language (C#) was not only time-consuming but also too risky for production, because we worried the new code might generate different results. Here is my solution; I hope it helps you as it helped me.

#plumberapi #rlanguage #dockerfile


Imagine you have a piece of code developed by your data scientist, mathematician, statistician, or whoever else is happy to work with R, and now your devs are required to plug that piece of code into your project. I don't need to imagine it because it happened to me, and this is how I solved it.

Step 0

Let's start by creating a folder for this project:

$ mkdir exposemyRcodeasAPI
$ cd  exposemyRcodeasAPI/

Create a text file and name it plumber.R:

$ touch plumber.R

Then open the plumber file with your text editor:

$ nano plumber.R

Step 1

Add the following to the head of your file:

library(fitdistrplus)
library(scales)
library(actuar)
library(plumber)

Step 2

Source the R code (let's call it funcs.R):

source("funcs.R")

Step 3

At run time, plumber will generate Swagger docs for your file. By adding the following line you set your API title (append it to plumber.R after sourcing the R code):

#* @apiTitle yourapinamehere

Step 4

Wrap whichever function you want to expose in a function with the following annotations:

#* Echo back the input
#* @param msg The message to echo
#* @get /echo
function(msg = "") {
    somefuncInFuncdotRFile(msg)
}

If you want a POST method, use #* @post instead of #* @get. To read more, see https://www.rplumber.io/.

Step 5

Now everything should be fine; you can dockerize your code and ship it anywhere you want:

$ touch Dockerfile

Append the following lines:

FROM trestletech/plumber

# to install R packages, uncomment the following line
#RUN R -e "install.packages('whatever is used in the code')"

# for system packages, uncomment & modify the following line
#RUN apt-get update -qq && apt-get install whatever is needed

# expose a port
EXPOSE 8000

# make a folder and copy your files
RUN mkdir /app
COPY funcs.R /app
COPY plumber.R /app
WORKDIR /app

# set the entrypoint: load the plumber file and serve it on port 8000
ENTRYPOINT ["R", "-e", "pr <- plumber::plumb('plumber.R'); pr$run(host='0.0.0.0', port=8000, swagger=TRUE)"]

Build your Docker image and forward port 8000 from the container to any port you want on the host.
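
For example, a minimal sketch (the image tag and test message are hypothetical):

$ docker build -t r-plumber-api .
$ docker run -d -p 8000:8000 r-plumber-api

# call the echo endpoint defined above
$ curl "http://localhost:8000/echo?msg=hello"

# with swagger=TRUE, the auto-generated docs should be at
# http://localhost:8000/__swagger__/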
