Sunday, November 6, 2016

Consul - Microservices Discovery

Consul Service Discovery

Consul is a HashiCorp tool for service discovery, service registry, and health checks.
It makes service discovery and configuration easy, distributed, highly available, and datacenter-aware.

  1. Service Discovery: Consul makes it simple for services to register themselves and to discover other services via a DNS or HTTP interface. External services such as SaaS providers can be registered as well; a sample service definition with a health check is sketched below.
    1. DNS Query Interface: Look up services using Consul's built-in DNS server. This supports existing infrastructure without any code changes.
      1. $ dig web-frontend.service.consul. ANY
  2. Failure Detection: Pairing service discovery with health checking prevents routing requests to unhealthy hosts and enables services to easily provide circuit breakers.
  3. Multi-Datacenter: Consul scales to multiple datacenters out of the box with no complicated configuration. Look up services in other datacenters, or keep the request local.
  4. Key/Value Storage: A flexible key/value store for dynamic configuration, feature flagging, coordination, leader election, and more. Long polling gives near-instant notification of configuration changes (see the blocking-query sketch after this list).
    1. Consul provides a hierarchical key/value store with a simple HTTP API. Managing configuration has never been simpler.
      1. $ consul kv put foo bar
      2. $ consul kv get foo
      3. $ consul kv get -detailed foo
      4. $ consul kv delete foo
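The long polling mentioned above corresponds to Consul's blocking queries on the HTTP API: pass the last seen index and the call waits (up to the wait limit) until the value changes. A minimal sketch, reusing the key foo from above and an index value of 42 taken from a previous response's X-Consul-Index header:

   $ curl -i "http://localhost:8500/v1/kv/foo"                    # the response carries an X-Consul-Index header
   $ curl "http://localhost:8500/v1/kv/foo?index=42&wait=30s"     # returns as soon as foo changes (or after 30s)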
Consul is designed to be friendly to both the DevOps community and application developers, making it perfect for modern, elastic infrastructure.
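As a concrete illustration of service registration and health checking, here is a minimal sketch (an added example; it assumes an agent started with -config-dir /etc/consul.d and a local web service listening on port 80):

   $ cat /etc/consul.d/web.json
   {
     "service": {
       "name": "web",
       "port": 80,
       "check": {
         "http": "http://localhost:80/",
         "interval": "10s"
       }
     }
   }
   $ consul reload
   $ dig @127.0.0.1 -p 8600 web.service.consul SRV

Only healthy instances are returned through the DNS and HTTP interfaces, which is what ties service discovery to failure detection.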

Architecture

Every node that provides services to Consul runs a Consul agent. Running an agent is not required for discovering other services or getting/setting key/value data. The agent is responsible for health checking the services on the node as well as the node itself.
The agents talk to one or more Consul servers. The Consul servers are where data is stored and replicated. The servers themselves elect a leader. While Consul can function with a single server, 3 or 5 servers (2n + 1) are recommended to avoid failure scenarios leading to data loss. A cluster of Consul servers is recommended for each datacenter.

Components of your infrastructure that need to discover other services or nodes can query any of the Consul servers or any of the Consul agents. The agents forward queries to the servers automatically.

Each datacenter runs a cluster of Consul servers. When a cross-datacenter service discovery or configuration request is made, the local Consul servers forward the request to the remote datacenter and return the result.
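For example, assuming a second datacenter named dc2 and the web-frontend service from earlier, a cross-datacenter lookup works through either interface and is forwarded to dc2 by the local servers:

   $ dig web-frontend.service.dc2.consul. ANY
   $ curl "http://localhost:8500/v1/catalog/service/web-frontend?dc=dc2"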

Glossary

  • Agent: An agent is the long-running daemon on every member of the Consul cluster. It is started by running $ consul agent. The agent is able to run in either client or server mode. Since all nodes must be running an agent, it is simpler to refer to the node as being either a client or a server, but there are other instances of the agent. All agents can run the DNS or HTTP interfaces and are responsible for running checks and keeping services in sync.
  • Client: A client is an agent that forwards all RPCs to a server. The client is relatively stateless. The only background activity a client performs is taking part in the LAN gossip pool. This has a minimal resource overhead and consumes only a small amount of network bandwidth.
  • Server: A server is an agent with an expanded set of responsibilities, including participating in the Raft quorum, maintaining cluster state, responding to RPC queries, exchanging WAN gossip with other datacenters, and forwarding queries to leaders or remote datacenters.
  • Datacenter: While the definition of a datacenter seems obvious, there are subtle details that must be considered. For example, in EC2, are multiple availability zones considered to comprise a single datacenter? We define a datacenter to be a networking environment that is private, low latency, and high bandwidth. This excludes communication that would traverse the public internet, but for our purposes multiple availability zones within a single EC2 region would be considered part of a single datacenter.
  • Consensus: Agreement upon the elected leader as well as agreement on the ordering of transactions. Since these transactions are applied to a finite-state machine, our definition of consensus implies the consistency of a replicated state machine.
  • Gossip: Gossip is built on top of Serf which provides a full gossip protocol that is used for multiple purposes. Serf provides membership, failure detection, and event broadcasting. It's enough to know that gossip involves random node-to-node communication, primarily over UDP.
  • LAN Gossip: Refers to the LAN gossip pool, which contains nodes that are all located on the same local area network or datacenter.
  • WAN Gossip: Refers to the WAN gossip pool, which contains only servers. These servers are primarily located in different datacenters and typically communicate over the internet or a wide area network.
  • RPC: Remote procedure call. This is a request/response mechanism allowing a client to make a request to a server.
High Level Architecture Diagram

Let's break down this diagram and describe each piece. First of all, we can see that there are two datacenters, labeled "one" and "two". Consul has first-class support for multiple datacenters and expects this to be the common case.
Within each datacenter, we have a mixture of clients and servers. It is expected that there will be three to five servers. This strikes a balance between availability in the case of failure and performance, as consensus gets progressively slower as more machines are added. However, there is no limit to the number of clients, and they can easily scale into the thousands or tens of thousands.
All the nodes in a datacenter participate in a gossip protocol. This means there is a gossip pool that contains all the nodes for a given datacenter. This serves a few purposes:
  1. There is no need to configure clients with the addresses of servers; discovery is done automatically.
  2. The work of detecting node failures is not placed on the servers but is distributed. This makes failure detection much more scalable than naive heartbeating schemes.
  3. It is used as a messaging layer to notify when important events such as leader election take place.
The servers in each datacenter are all part of a single Raft peer set. This means that they work together to elect a single leader, a selected server which has extra duties. The leader is responsible for processing all queries and transactions. Transactions must also be replicated to all peers as part of the consensus protocol. Because of this requirement, when a non-leader server receives an RPC request, it forwards it to the cluster leader.

The server nodes also operate as part of a WAN gossip pool. This pool is different from the LAN pool as it is optimized for the higher latency of the internet and is expected to contain only other Consul server nodes. The purpose of this pool is to allow datacenters to discover each other in a low-touch manner. Bringing a new datacenter online is as easy as joining the existing WAN gossip pool. Because the servers are all operating in this pool, it also enables cross-datacenter requests. When a server receives a request for a different datacenter, it forwards it to a random server in the correct datacenter. That server may then forward the request to the local leader.

"This results in very low coupling between datacenters, but because of failure detection, connection caching and multiplexing, cross-datacenter requests are relatively fast and reliable."


CONSUL Commands (CLI)

Consul is controlled via a very easy-to-use command-line interface (CLI). Consul is just a single command-line application: consul. This application then takes a subcommand such as "agent" or "members". It also responds to -h and --help. To get help for any specific command, pass the -h flag to the relevant subcommand; for example, to see help for the join subcommand:
    $ consul join -h

Consul Setup

Consul Service Discovery tool example:

   # Run these on all consul servers.
   $ sudo su
   $ apt-get update
   $ apt-get install git unzip -y
   $ cd /root
   $ wget https://dl.bintray.com/mitchellh/consul/0.5.2_linux_amd64.zip
   $ unzip 0.5.2_linux_amd64.zip
   $ rm -f 0.5.2_linux_amd64.zip
   $ mv consul /usr/bin/
   $ git clone https://github.com/anupkumardixit/consul_demo.git


   # Bootstrap / Web UI Server
   ---------------------------
   $ wget https://dl.bintray.com/mitchellh/consul/0.5.2_web_ui.zip
   $ unzip 0.5.2_web_ui.zip
   $ rm -f 0.5.2_web_ui.zip
   $ cd /root/consul_demo
   $ cp bootstrap.json config.json

   # Save the generated key! Note: if your key has a slash in it, you need to escape it for setup.sh, or just regenerate until it doesn't have one :)
   $ consul keygen

   # Non-Bootstrap Consul Server
   ---------------------------
   $ cd /root/consul_demo
   $ cp server.json config.json

  # Consul Agent Server
   ---------------------------
   $ apt-get install apache2 -y
   $ cd /root/consul_demo
   $ cp agent.json config.json

   $ ./setup.sh HOSTNAME ENCRYPT_KEY IP_OF_BOOTSTRAP IP_NON_BOOTSTRAP
   $ nohup consul agent -config-file /root/consul_demo/config.json &


# Now let's test the key/value store on our agent server.
   $ curl -X PUT -d 'test' http://localhost:8500/v1/kv/web/key1
   $ curl http://localhost:8500/v1/kv/web/key1
   $ curl http://localhost:8500/v1/kv/web/key1?raw
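A few quick checks to confirm that the cluster has formed (the service name to query depends on what agent.json registers; web below is just a hypothetical example):

   $ consul members                                # all agents in the LAN gossip pool
   $ consul members -wan                           # run on a server: servers in the WAN gossip pool
   $ dig @127.0.0.1 -p 8600 web.service.consul     # DNS interface on the agent's default DNS port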



Monday, October 3, 2016

Java 8 - Writing asynchronous code with CompletableFuture


You probably already know about Futures

A Future represents the pending result of an asynchronous computation. It offers a method — get — that returns the result of the computation when it's done.

The problem is that a call to get is blocking until the computation is done. This is quite restrictive and can quickly make the asynchronous computation pointless.
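To make this concrete, a minimal sketch with a plain Future (the single-thread executor and the sleep are just stand-ins for real work):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

ExecutorService executor = Executors.newSingleThreadExecutor();
Future<String> future = executor.submit(() -> {
    Thread.sleep(1000);           // simulate a slow computation
    return "result";
});
String result = future.get();     // blocks the calling thread until the computation is done
executor.shutdown();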

Sure — you can keep coding all scenarios into the job you're sending to the executor, but why should you have to worry about all the plumbing around the logic you really care about?

This is where CompletableFuture saves the day

Besides implementing the Future interface, CompletableFuture also implements the CompletionStage interface.

A CompletionStage is a promise. It promises that the computation eventually will be done.

The great thing about the CompletionStage is that it offers a vast selection of methods that let you attach callbacks that will be executed on completion.

This way we can build systems in a non-blocking fashion.

For example, here is one of the many methods it offers (note that in the final Java 8 API the "other" argument is a CompletionStage):

public <U,V> CompletableFuture<V> thenCombineAsync(CompletionStage<? extends U> other, BiFunction<? super T,? super U,? extends V> fn, Executor executor)

The simplest asynchronous computation

Let's start with the absolute basics — creating a simple asynchronous computation.
CompletableFuture.supplyAsync(this::sendMsg);

It's as easy as that.
supplyAsync takes a Supplier containing the code we want to execute asynchronously — in our case the sendMsg method.

If you've worked a bit with Futures in the past, you may wonder where the Executor went. If you want to, you can still define it as a second argument. However, if you leave it out it will be submitted to the ForkJoinPool.commonPool().

Methods that do not take an Executor as an argument but end with ...Async will use ForkJoinPool.commonPool() (a global, general-purpose pool introduced in JDK 8).

This applies to most methods in the CompletableFuture class. runAsync() is simple to understand: notice that it takes a Runnable, so it returns CompletableFuture<Void>, as a Runnable doesn't return anything. If you need to process something asynchronously and return a result, use Supplier<U>:

final CompletableFuture<String> future = CompletableFuture.supplyAsync(new Supplier<String>() {
    @Override
    public String get() {
        //...long running...
        return "42";
    }
}, executor);

lambdas:
final CompletableFuture<String> future = CompletableFuture.supplyAsync(() -> {
    //...long running...
    return "42";
}, executor);

methods:
final CompletableFuture<String> future = CompletableFuture.supplyAsync(() -> longRunningTask(params), executor);

Creating and obtaining CompletableFuture

Factory Methods:
   static <U> CompletableFuture<U> supplyAsync(Supplier<U> supplier);
   static <U> CompletableFuture<U> supplyAsync(Supplier<U> supplier, Executor executor);
   static CompletableFuture<Void> runAsync(Runnable runnable);
   static CompletableFuture<Void> runAsync(Runnable runnable, Executor executor);
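Since runAsync() takes a Runnable, the resulting future carries no value and only signals completion. A minimal sketch:

CompletableFuture<Void> done = CompletableFuture.runAsync(() -> System.out.println("fire-and-forget side effect"));
done.thenRun(() -> System.out.println("...and it finished"));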

Attaching a callback

Our first asynchronous task is done. Let's add a callback to it!
The beauty of a callback is that we can say what should happen when an asynchronous computation is done without waiting around for the result.
In the first example, we simply sent a message asynchronously by executing sendMsg in its own thread.
Now let's add a callback where we notify about how the sending of the message went.

CompletableFuture.supplyAsync(this::sendMsg).thenAccept(this::notify);
thenAccept is one of many ways to add a callback. It takes a Consumer — in our case notify — which handles the result of the preceding computation when it's done.

Chaining multiple callbacks

Transforming and acting on one CompletableFuture (thenApply):

<U> CompletableFuture<U> thenApply(Function<? super T,? extends U> fn);
<U> CompletableFuture<U> thenApplyAsync(Function<? super T,? extends U> fn);
<U> CompletableFuture<U> thenApplyAsync(Function<? super T,? extends U> fn, Executor executor);

CompletableFuture<String> f1 = //...
CompletableFuture<Integer> f2 = f1.thenApply(Integer::parseInt);
CompletableFuture<Double> f3 = f2.thenApply(r -> r * r * Math.PI);

lambdas:
   CompletableFuture<Double> f3 = f1.thenApply(Integer::parseInt).thenApply(r -> r * r * Math.PI);

Running code on completion (thenAccept/thenRun)

CompletableFuture<Void> thenAccept(Consumer<? super T> block);
CompletableFuture<Void> thenRun(Runnable action);

   future.thenAcceptAsync(dbl -> log.debug("Result: {}", dbl), executor);
   log.debug("Continuing");
...Async variants are available for both methods, with implicit and explicit executors. I can't emphasize this enough: the thenAccept()/thenRun() methods do not block (even without an explicit executor). Treat them like an event listener/handler that you attach to a future and that will execute some time in the future. The "Continuing" message will appear immediately, even if the future is not even close to completion.

Combining two CompletableFuture together

Combining (chaining) two futures (thenCompose())
<U> CompletableFuture<U> thenCompose(Function<? super T,? extends CompletionStage<U>> fn);

CompletableFuture<Document> docFuture = //...


CompletableFuture<CompletableFuture<Double>> f = docFuture.thenApply(this::calculateRelevance);
CompletableFuture<Double> relevanceFuture = docFuture.thenCompose(this::calculateRelevance);
//...
private CompletableFuture<Double> calculateRelevance(Document doc)  //...

Transforming values of two futures (thenCombine())

<U,V> CompletableFuture<V> thenCombine(CompletionStage<? extends U> other, BiFunction<? super T,? super U,? extends V> fn)

CompletableFuture<Customer> customerFuture = loadCustomerDetails(123);
CompletableFuture<Shop> shopFuture = closestShop();
CompletableFuture<Route> routeFuture = customerFuture.thenCombine(shopFuture, (cust, shop) -> findRoute(cust, shop));
//...
private Route findRoute(Customer customer, Shop shop) //...

Notice that in Java 8 you can replace (cust, shop) -> findRoute(cust, shop) with simple this::findRoute method reference:
    customerFuture.thenCombine(shopFuture, this::findRoute);

If you want to continue passing values from one callback to another, thenAccept won't cut it since Consumer doesn't return anything.
To keep passing values, you can simply use thenApply instead.
thenApply takes a Function which accepts a value, but also returns one.

To see how this works, let's extend our previous example by first finding a receiver.

CompletableFuture.supplyAsync(this::findReceiver).thenApply(this::sendMsg).thenAccept(this::notify);

Now the asynchronous task will first find a receiver, then send a message to the receiver before it passes the result on to the last callback to notify.

Building asynchronous systems

When building bigger asynchronous systems, things work a bit differently.

You'll usually want to compose new pieces of code based on smaller pieces of code. Each of these pieces would typically be asynchronous — in our case returning CompletionStages.

Until now, sendMsg has been a normal blocking function. Let's now assume that we got a sendMsgAsync method that returns a CompletionStage.

If we kept using thenApply to compose the example above, we would end up with nested CompletionStages.

// Returns type CompletionStage<CompletionStage<String>>
CompletableFuture.supplyAsync(this::findReceiver).thenApply(this::sendMsgAsync);

We don't want that, so instead we can use thenCompose which allows us to give a Function that returns a CompletionStage. This will have a flattening effect like a flatMap.

// Returns type CompletionStage<String>
CompletableFuture.supplyAsync(this::findReceiver).thenCompose(this::sendMsgAsync);

This way we can keep composing new functions without ending up with nested CompletionStages.

Callback as a separate task using the async suffix

Until now all our callbacks have been executed on the same thread as their predecessor.

If you want to, you can submit the callback to the ForkJoinPool.commonPool() independently instead of using the same thread as the predecessor. This is done by using the async suffix version of the methods CompletionStage offers.

Let's say we want to send two messages at one go to the same receiver.
CompletableFuture<String> receiver  = CompletableFuture.supplyAsync(this::findReceiver);
receiver.thenApply(this::sendMsg);
receiver.thenApply(this::sendOtherMsg);

In the example above, everything will be executed on the same thread. This results in the last message waiting for the first message to complete.

Now consider this code instead.
CompletableFuture<String> receiver = CompletableFuture.supplyAsync(this::findReceiver);
receiver.thenApplyAsync(this::sendMsg);
receiver.thenApplyAsync(this::sendOtherMsg);

By using the async suffix, each callback is submitted as a separate task to the ForkJoinPool.commonPool(). This results in both message-sending callbacks being executed once the preceding computation is done.
The key point is that the async versions can be convenient when you have several callbacks dependent on the same computation.

What to do when it all goes wrong

As you know, bad things can happen. And if you've worked with Future before, you know how bad it could get.
Luckily CompletableFuture has a nice way of handling this, using exceptionally.

CompletableFuture.supplyAsync(this::failingMsg).exceptionally(ex -> new Result(Status.FAILED)).thenAccept(this::notify);

exceptionally gives us a chance to recover by taking an alternative function that will be executed if preceding calculation fails with an exception.
This way succeeding callbacks can continue with the alternative result as input.
If you need more flexibility, check out whenComplete and handle for more ways of handling errors.

Error handling of single CompletableFuture

CompletableFuture<Integer> safe = future.handle((ok, ex) -> {
    if (ok != null) {
        return Integer.parseInt(ok);
    } else {
        log.warn("Problem", ex);
        return -1;
    }
});

Or, using exceptionally():
CompletableFuture<String> safe = future.exceptionally(ex -> "We have a problem: " + ex.getMessage());

handle() is always called, with either the result or the exception argument being non-null. This is a one-stop, catch-all strategy.
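whenComplete(), also mentioned above, sits in between: it lets you observe the result or the exception but, unlike handle(), it does not change the outcome. A small sketch reusing the future and log from the handle() example:

CompletableFuture<String> observed = future.whenComplete((ok, ex) -> {
    if (ex != null) {
        log.warn("Problem", ex);
    } else {
        log.debug("Result: {}", ok);
    }
});
// "observed" completes with the same value or the same exception as "future".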

Callback depending on multiple computations

Sometimes it would be really helpful to be able to create a callback that is dependent on the result of two computations. This is where thenCombine becomes handy.

thenCombine allows us to register a BiFunction callback depending on the result of two CompletionStages.

To see how this is done, let's not only find a receiver but also execute the heavy job of creating some content before sending a message.

CompletableFuture<String> to = CompletableFuture.supplyAsync(this::findReceiver);
CompletableFuture<String> text = CompletableFuture.supplyAsync(this::createContent);

to.thenCombine(text, this::sendMsg);

First, we've started two asynchronous jobs — finding a receiver and creating some content. Then we use thenCombine to say what we want to do with the result of these two computations by defining our BiFunction.

It's worth mentioning that there is a related method called runAfterBoth as well. This version takes a Runnable and doesn't care about the actual values of the preceding computations, only that they're actually done.
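A tiny illustration reusing the to and text futures from above (an added example, not from the original post):

to.runAfterBoth(text, () -> System.out.println("Both the receiver and the content are ready"));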

Callback dependent on one or the other

Ok, so we've now covered the scenario where you depend on two computations. Now, what about when you just need the result of one of them?

Let's say you have two sources for finding a receiver. You'll ask both, but will be happy with the first one to return a result.

CompletableFuture<String> firstSource = CompletableFuture.supplyAsync(this::findByFirstSource);
CompletableFuture<String> secondSource = CompletableFuture.supplyAsync(this::findBySecondSource);

firstSource.acceptEither(secondSource, this::sendMsg);

As you can see, it's solved easily by acceptEither, which takes the two awaiting computations and a Consumer that will be executed with the result of whichever one completes first.

Waiting for both CompletableFutures to complete

<U> CompletableFuture<Void> thenAcceptBoth(CompletionStage<? extends U> other, BiConsumer<? super T,? super U> action)
CompletableFuture<Void> runAfterBoth(CompletionStage<?> other, Runnable action)

Imagine that in the example above, instead of producing a new CompletableFuture<Route>, you simply want to send some event or refresh the GUI immediately. This can be easily achieved with thenAcceptBoth():
customerFuture.thenAcceptBoth(shopFuture, (cust, shop) -> {
    final Route route = findRoute(cust, shop);
    //refresh GUI with route
});

I hope I'm wrong, but maybe some of you are asking yourselves: why can't I simply block on these two futures? Like here:
Future<Customer> customerFuture = loadCustomerDetails(123);
Future<Shop> shopFuture = closestShop();
findRoute(customerFuture.get(), shopFuture.get());

You can, of course. But as noted at the beginning, get() blocks the calling thread, and the whole point of CompletableFuture is to support an asynchronous, event-driven programming model instead of eagerly waiting for results.

Waiting for first CompletableFuture to complete

Another interesting part of the CompletableFuture API is the ability to wait for the first (as opposed to all) completed future. This can come in handy when you have two tasks yielding results of the same type and you only care about response time, not which task finished first. API methods (...Async variations are available as well):
 CompletableFuture<Void> acceptEither(CompletionStage<? extends T> other, Consumer<? super T> action)
 CompletableFuture<Void> runAfterEither(CompletionStage<?> other, Runnable action)

CompletableFuture<String> fast = fetchFast();
CompletableFuture<String> predictable = fetchPredictably();
fast.acceptEither(predictable, s -> {
    System.out.println("Result: " + s);
});

Transforming first completed

<U> CompletableFuture<U> applyToEither(CompletionStage<? extends T> other, Function<? super T,U> fn)
   CompletableFuture<String> fast = fetchFast();
   CompletableFuture<String> predictable = fetchPredictably();
   CompletableFuture<String> firstDone = fast.applyToEither(predictable, Function.<String>identity());

Combining multiple CompletableFuture together

static CompletableFuture<Void> allOf(CompletableFuture<?>... cfs)
static CompletableFuture<Object> anyOf(CompletableFuture<?>... cfs)

allOf() takes an array of futures and returns a future that completes when all of the underlying futures have completed (a barrier waiting for all of them). anyOf(), on the other hand, waits only for the fastest of the underlying futures.
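Because allOf() yields a CompletableFuture<Void>, a common follow-up is collecting the individual results once the barrier completes. A minimal sketch with three hypothetical futures:

import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;
import java.util.stream.Stream;

CompletableFuture<String> f1 = CompletableFuture.supplyAsync(() -> "a");
CompletableFuture<String> f2 = CompletableFuture.supplyAsync(() -> "b");
CompletableFuture<String> f3 = CompletableFuture.supplyAsync(() -> "c");

CompletableFuture<List<String>> all = CompletableFuture.allOf(f1, f2, f3)
        .thenApply(v -> Stream.of(f1, f2, f3)
                .map(CompletableFuture::join)        // join() does not block here: allOf guarantees completion
                .collect(Collectors.toList()));

CompletableFuture<Object> any = CompletableFuture.anyOf(f1, f2, f3);   // completes with the first available result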


Linux Utilities


Shellinabox

Reference Links:
https://pkgs.org/download/shellinabox
http://linoxide.com/tools/web-remote-your-server/

$ wget http://dl.fedoraproject.org/pub/epel/7/x86_64/s/shellinabox-2.19-1.el7.x86_64.rpm
$ yum localinstall -y shellinabox-2.19-1.el7.x86_64.rpm   # install the downloaded package
$ service shellinaboxd start
$ netstat -nap | grep shellinabox
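Shellinabox listens on TCP port 4200 by default. If firewalld is running (typical on EL7), the port will likely need to be opened before the web terminal is reachable; a sketch, adjust to your environment:

$ firewall-cmd --permanent --add-port=4200/tcp
$ firewall-cmd --reload
# then point a browser at https://<server-ip>:4200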


Subscription Manager Workaround (the reason for downloading the RPM directly instead of using yum):

[root@sv2lxmkpdi10 ~]# yum install shellinabox
Loaded plugins: langpacks, product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
There are no enabled repos.
 Run "yum repolist all" to see the repos you have.
 You can enable repos with yum-config-manager --enable <repo>
[root@sv2lxmkpdi10 ~]#