Hazelcast C++ Client
Hazelcast C++ Client Library
hazelcast::client::pipelining< E > Class Template Reference

@Beta

#include <pipelining.h>

Inheritance diagram for hazelcast::client::pipelining< E >

Public Member Functions

std::vector< boost::optional< E > > results ()
 Returns the results.
 
void add (boost::future< boost::optional< E >> future)
 Adds a future to this Pipelining or blocks until there is capacity to add the future to the Pipelining.
 

Static Public Member Functions

static std::shared_ptr< pipelining > create (int depth)
 Creates a Pipelining with the given depth.
 

Detailed Description

template<typename E>
class hazelcast::client::pipelining< E >

@Beta

The Pipelining can be used to speed up requests. It is built on top of asynchronous calls such as IMap#getAsync(const K&) or any other asynchronous invocation.

The main purpose of the Pipelining is to control the number of concurrent requests when using asynchronous invocations. This is done by setting the depth at construction time. For example, with a depth of 100 and 1000 calls, at any given moment there will be at most 100 concurrent requests.
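For illustration, here is a minimal sketch of that pattern, assuming the 4.x client API (hazelcast::new_client(), hazelcast_client::get_map(), imap::get<K, V>()) and a hypothetical map named "numbers"; only create(), add() and results() are taken from this page.

    // Requires <hazelcast/client/hazelcast_client.h> plus the pipelining.h
    // header listed at the top of this page.
    #include <hazelcast/client/hazelcast_client.h>

    int main() {
        // Connect to a cluster with the default configuration.
        auto hz = hazelcast::new_client().get();
        auto map = hz.get_map("numbers").get();

        // Depth 100: at most 100 of the 1000 gets are in flight at any moment.
        auto pl = hazelcast::client::pipelining<std::string>::create(100);

        for (int key = 0; key < 1000; ++key) {
            // add() blocks the calling thread once 100 invocations are pending.
            pl->add(map->get<int, std::string>(key));
        }

        // Blocks until every invocation has completed; the results come back
        // in the order the futures were added.
        auto values = pl->results();

        return 0; // client shutdown omitted for brevity
    }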

The optimal depth (the number of invocations in flight) depends on the situation. If it is too high, you can run into memory-related problems; if it is too low, it will provide little or no performance advantage. In most cases, a Pipelining with a few hundred map/cache puts/gets should not lead to any problems. For testing purposes we frequently use a Pipelining of 1000 or more concurrent requests to saturate the system.

The Pipelining can't be used for transaction purposes: you can't create a Pipelining, add a set of asynchronous requests, and then skip calling results() to prevent those requests from executing. Invocations may already be executed before results() is called.

The Pipelining isn't thread-safe, so only a single thread should add requests to the Pipelining and wait for results.

Currently all futures and their responses are stored in the Pipelining, so be careful when executing a huge number of requests with a single Pipelining: it can lead to a huge memory bubble. In such cases it is better to periodically replace the Pipelining with a new one after waiting for completion. In the future we might provide this out of the box, but currently we do not.
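As a sketch of that periodic-replacement idea (the fetch_in_batches helper, the map/key/value types and the batching parameters are illustrative, not part of this API):

    #include <hazelcast/client/hazelcast_client.h>
    #include <algorithm>
    #include <iterator>
    #include <memory>
    #include <string>
    #include <vector>

    // Illustrative helper: fetch `total` keys in batches, replacing the
    // Pipelining after each batch so the futures and responses stored for
    // completed requests can be released.
    std::vector<boost::optional<std::string>>
    fetch_in_batches(const std::shared_ptr<hazelcast::client::imap>& map,
                     int total, int batch_size, int depth) {
        std::vector<boost::optional<std::string>> all;
        all.reserve(static_cast<std::size_t>(total));
        for (int start = 0; start < total; start += batch_size) {
            auto pl = hazelcast::client::pipelining<std::string>::create(depth);
            const int end = std::min(start + batch_size, total);
            for (int key = start; key < end; ++key) {
                pl->add(map->get<int, std::string>(key));
            }
            // Drain this batch, then let the Pipelining go out of scope.
            auto chunk = pl->results();
            all.insert(all.end(),
                       std::make_move_iterator(chunk.begin()),
                       std::make_move_iterator(chunk.end()));
        }
        return all;
    }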

A Pipelining provides its own backpressure on the system, so there will never be more in-flight invocations than the depth of the Pipelining. This means that the Pipelining works fine when backpressure on the client/member is disabled (the default). It also works fine when backpressure is enabled, but keep in mind that the number of concurrent invocations in the Pipelining could be lower than its configured depth, because the backpressure on the client/member takes precedence.

The Pipelining has been marked as Beta because we still need to see how the API evolves, but there is no problem using it in production. We use similar techniques to achieve high performance.

Parameters
<E>    the type of the results produced by the added futures.

Definition at line 76 of file pipelining.h.

Member Function Documentation

◆ add()

template<typename E >
void hazelcast::client::pipelining< E >::add ( boost::future< boost::optional< E >>  future)
inline

Adds a future to this Pipelining or blocks until there is capacity to add the future to the Pipelining.

This call blocks until there is space in the Pipelining, but that does not mean that the invocation which returned the future is blocked.

Parameters
future    the future to add.
Exceptions
null_pointer    if future is null.

Definition at line 124 of file pipelining.h.

124  {
125  down();
126 
127  futures_.push_back(future.share());
128  }

◆ create()

template<typename E >
static std::shared_ptr<pipelining> hazelcast::client::pipelining< E >::create ( int  depth)
inline static

Creates a Pipelining with the given depth.

We use this factory create method and hide the constructor from the user because the private up() and down() methods of this class may be called from the executor thread, and we need to make sure that the Pipelining instance is not destroyed while these methods are running.

Parameters
depth    the maximum number of concurrent calls allowed in this Pipelining.
Exceptions
illegal_argument    if depth is smaller than 1. Note that a depth of 1 means every call is effectively synchronous, so you will not benefit from pipelining at all.

Definition at line 89 of file pipelining.h.

89  {
90  util::Preconditions::check_positive(depth, "depth must be positive");
91 
92  return std::shared_ptr<pipelining>(new pipelining(depth));
93  }

◆ results()

template<typename E >
std::vector<boost::optional<E> > hazelcast::client::pipelining< E >::results ( )
inline

Returns the results.

The results are returned in the order in which the requests were added.

This call waits until all requests have completed.

Returns
the vector of results.
Exceptions
IException    if something fails while getting the results.

Definition at line 105 of file pipelining.h.

105  {
106  std::vector<boost::optional<E>> result;
107  result.reserve(futures_.size());
108  auto result_futures = when_all(futures_.begin(), futures_.end());
109  for (auto &f : result_futures.get()) {
110  result.emplace_back(f.get());
111  }
112  return result;
113  }
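As a small, illustrative sketch of consuming the returned vector (print_results is not part of this API): each entry is a boost::optional<E>, and the entry at index i corresponds to the i-th future added to the Pipelining.

    #include <boost/optional.hpp>
    #include <iostream>
    #include <string>
    #include <vector>

    // Illustrative consumer: a missing value shows up as an empty optional
    // rather than an exception.
    void print_results(const std::vector<boost::optional<std::string>>& values) {
        for (std::size_t i = 0; i < values.size(); ++i) {
            if (values[i]) {
                std::cout << "request " << i << " -> " << *values[i] << '\n';
            } else {
                std::cout << "request " << i << " -> (no value)\n";
            }
        }
    }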

The documentation for this class was generated from the following file:
pipelining.h