Friday, February 13, 2015

MicroXchg 2015 Berlin

TL;DR - Great location, well organized, relevant topics and speakers. The microservices concept opportunistically takes advantage of unprecedented computing power and availability to address old and well-known architecture problems (decoupling, fault tolerance and adaptability).

Okay, you're interested. On to the (very opinionated) details ...

The vast majority of the audience was local (German) and, not to my surprise, my old and once functional German vocabulary failed me very badly. Fortunately, it was very easy to follow the English track of presentations, and I suspect the relevant parts of the German talks were revisited (many times) in the talks I attended. Kudos to the microXchg 2015 team for the event organization, and I hope we can meet again next year.

It started with an interesting talk from James Lewis sharing experiences with microservice adoption at some of his consultancy customers. It was interesting to see that what convinces CTOs (and CEOs) about microservices is the financial impact of not being able to keep their software evolving with their business, not specifically the direct cost of change itself. As time passes, monoliths become harder and harder to fix, adapt, improve and even maintain. So, the hope of avoiding exactly the same scenario very soon is what drives people to try something different in software development; i.e., the cost and risk of maintaining the current system, plus the lost business opportunities, are high enough to encourage a change.

Often the transition to microservices involves organizational changes, initiatives to decentralize decisions, more cohesive business models, agile methodologies, continuous delivery and small development teams. This makes microservices more of a consequence of those changes than a methodology or technique per se. It could be interpreted as a manifestation of Conway's law, where a system resembles its target organization and vice-versa.

It's very likely that the complete transition from a traditional legacy system (think banking, insurance, e-commerce, etc.) will take much longer than you first expect, or may not even complete within a reasonable period (say, 2 or 3 years). In order to sustain the long transition (financially and politically) it has to show quick results, so it has to start small and solve isolated problems; this way we can simultaneously build trust among stakeholders and confidence in the teams applying the concept.

Very nice opening, I might say.

The following talk, from Richard Rodger, was about the importance of relevant metrics in a microservice environment and how to achieve effective visibility by modelling and monitoring system invariants (combining functional metrics such as # of checkouts / # of invoices in e-commerce; any deviation from a 1:1 ratio means something is wrong).
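As a minimal sketch of that idea (the names and tolerance are my own assumptions, not Rodger's code), a ratio invariant over two functional metrics could look like:

```python
def ratio_invariant(numerator, denominator, tolerance=0.05):
    """Return True while two functional metrics stay in a ~1:1 ratio.

    E.g. numerator = number of checkouts, denominator = number of
    invoices; any sustained deviation from 1.0 signals that something
    in the pipeline between the two services is broken.
    """
    if denominator == 0:
        return numerator == 0
    return abs(numerator / denominator - 1.0) <= tolerance

# Checkouts that produced no invoice show up as a broken invariant:
assert ratio_invariant(1000, 1000)
assert not ratio_invariant(1000, 900)
```

The point is that these business-level counters catch failures that host-level metrics (CPU, memory) never see.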

Chris Richardson described the concept of event-driven (micro)services, where the design focus is reacting to system events (user actions) and persisting state changes. Continuous delivery with Docker was first mentioned here and became a pervasive subject in the subsequent talks. Interesting, really, but very biased toward Scala and Spring Boot (IMHO).
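In spirit (this is my toy sketch, not Richardson's code, and all names are invented), an event-driven service reacts to incoming events and persists the resulting state changes as an append-only log:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    name: str
    payload: dict

@dataclass
class OrderService:
    """Toy event-driven service: state is only ever changed by
    handling events, and every event is appended to a log."""
    log: list = field(default_factory=list)
    status: str = "new"

    def handle(self, event: Event):
        if event.name == "order_placed":
            self.status = "placed"
        elif event.name == "order_shipped":
            self.status = "shipped"
        self.log.append(event)  # persist the state change

svc = OrderService()
svc.handle(Event("order_placed", {"id": 42}))
svc.handle(Event("order_shipped", {"id": 42}))
assert svc.status == "shipped" and len(svc.log) == 2
```

Because the log is the source of truth, current state can always be rebuilt by replaying the events.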

Actually, I was already prepared for a storm of Java, Scala, Clojure and other crack-of-the-day languages. In the end, microservices seem to accommodate niche language/framework diversity very nicely (obviously not talking about the omnipresent enterprise lingua franca, Java). "There is always one best tool for each job", a non-hammer mantra.

Anyway, there were plenty of talks that required heavy Java-buzzword filtering, not because they were not relevant, but mainly because I am ignorant about the subject. Absolutely no offence meant to the speakers. And I must recognize that many daily-used Python chunks were heavily Java-inspired (logging comes to mind).

Fred George presented microservices challenges and how empowering developer teams with respect to their corresponding products translates into higher velocity (faster results == lower costs). Microservices become one of the means to achieve "developer anarchy", giving developers complete control of, and responsibility for, their software.

A panel session closed the day: unasked questions and new answers, more Docker, microservices experiences, etc ...

The second day started with Stefan Tilkov presenting the Self-Contained Systems (SCS) approach to microservices, with great insights about keeping each entity as simple as possible (but not simpler), avoiding unnecessary complexity in developing and integrating services, and mainly focusing on clear and stable APIs/user interfaces (complete UI-logic-data services).

Adrian Cockcroft presented the state of the art in microservices, obviously illustrated with his breakthrough experiences at Netflix but also other success stories like Gilt and WalmartLabs, and the new problems emerging from heterogeneous and highly dynamic systems. Additionally, he showed a real example of developer empowerment in deciding when and what to release, and its flip side, which is getting a PagerDuty notification when your software behaves badly in production; i.e., with power comes responsibility, and also immense peer pressure to get and keep products running in the best possible way.

Sam Newman followed with a talk about microservices essentials and the importance of having well isolated, automated and instrumented (functional metrics, aggregated logs, etc.) units, to be able to analyze and react to unpredictable scenarios happening in production. The ability to deploy microservices individually is key to success and enables many other degrees of development freedom (technology agnosticism, higher development speed, lower maintenance cost, etc.); it also leads to decentralized data management, while requiring additional automation for deployment and self-healing (circuit breakers).
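A circuit breaker is simple enough to sketch in a few lines. This is a minimal illustration of the pattern (my own simplification, not code from the talk): after a few consecutive failures the breaker "opens" and rejects calls outright for a while, instead of letting callers pile up on a dependency that is already down.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after `max_failures` consecutive
    errors, calls are short-circuited for `reset_timeout` seconds."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: call rejected")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()
            raise
        self.failures = 0  # a success closes the circuit again
        return result
```

A real implementation would add per-state metrics and fallbacks, but the self-healing idea is all here: fail fast while the dependency is sick, probe it again after a timeout.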

The conference ended with a very good (one of the best, IMO) talk from Chad Fowler describing the transition from the monolithic Ruby+RDBMS Wunderlist 2 to the current microservices-based system. It was very interesting to hear the real experience (and pain) of maintaining a system in a terminal state, because it's still the company's core business, while creating a new one. There are ways to benefit both sides of this equation, not repeating the same errors while solving each isolated problem.

Chad described how hard it was to sell the "rewrite" to stakeholders; it was only possible because it was the only viable alternative for the company and the stakeholders' investments. A really critical scenario when you think about the personal impact on everyone involved, which increased the pressure for the new approach to succeed (not always the best adoption scenario) and added to a lot of skepticism about the performance and stability of the new system. The action taken by the development team to address the generalized concerns was to expose the new system to a series of catastrophic scenarios (high load, broken pieces of infrastructure, etc.) as much as possible. This way many problems were discovered, analyzed and solved very early, with much less impact (and cost).

Another interesting fact was that the initial microservice-based system was also entirely written in Ruby, which the developers were very familiar with; the "rewrite" was entirely about a new architecture. Later on, analyzing the requirements of each component, other languages and frameworks were tried on each individual service, and only the successful (and convenient at the time) ones replaced the originals. As mentioned above, the search for "the best tool for the job", in fact, never ends; there will always be a new optimized language out there that could solve one of your specific problems in a better way (most commonly, faster).

That was it ...

I really enjoyed the talks; the design is indeed more important than the technologies used to implement it. Having each unit autonomous and small means that replacing any of them, whenever they are no longer ideal, will always be cheap.

There seems to be a lot of innovative, concrete and honest effort surrounding the microservices concept to move it well beyond the early-21st-century hall of empty buzzwords and actually help software designers thrive in this whole new (and wild) world, where speed is getting increasingly more important than anything else.

Monday, July 20, 2009

Launchpad API for PPAs - part 3

Continuing the Launchpad API for PPAs series, let's illustrate how to copy specific sources from one archive to another.

An authenticated user may copy sources (with or without their binaries) from any public archive to any PPA they have permission to upload to, using syncSource.

One practical example is backporting recent SRUs to LTS series using your PPA. Let's say we want the latest libvirt version available for testing on a Hardy instance.

{{{
# 'lp' is the authenticated Launchpad session from the previous posts.
ubuntu = lp.distributions['ubuntu']
primary, partner = ubuntu.archives
ppa = lp.me.getPPAByName(name='ppa')
# Copy the libvirt source from the Ubuntu primary archive into the
# PPA, targeting the hardy series; the binaries will be rebuilt.
ppa.syncSource(
    source_name='libvirt', version='0.6.1-0ubuntu5.1',
    from_archive=primary, include_binaries=False,
    to_series='hardy', to_pocket='Release')
}}}

libvirt 0.6.1-0ubuntu5.1 will be rebuilt in your PPA for hardy, and if everything is compatible, in a few minutes you will be able to use it and share it with other users.