Facebook C++ Conference

I managed to snatch tickets for the Facebook C++ Conference. I will be live blogging the event all day. Hope you enjoy!



Mainstream hardware is becoming permanently parallel, heterogeneous and distributed.
– The Moore’s Law mine is ending, but we know the answer: open a new mine.
– As usual, the end of one wave overlaps the beginning of the next.
The long-term free lunch requires apps with lots of juicy latent parallelism, expressed in a form that can be spread across a machine with a variable number of cores of different kinds – big/small/specialized cores, local and distributed cores.
– The filet mignon of throughput gains is still on the menu, but now it costs extra effort – development, complexity, and testing.


Hardware is in motion! This talk is tougher to describe, as the presentation relies on many graphics.


We are observing the end of Moore’s Law. We can no longer rely on single-threaded performance increases; we need to go beyond multi-core to cloud cores and heterogeneous cores.


– Multicore (2005-) e.g. iPad 2, Kindle Fire, etc.
— Initial transition now complete in all mainstream form factors
– Heterogeneous cores (2009-) e.g. compute-capable GPUs.
— 100x and 1000x parallelism already available on home machines for apps that can harness the GPU.
– Elastic compute cloud cores (HaaS, 2010-) e.g. AWS, Azure
— Summer 2011


We are entering a Jungle! 2011-201x Put a heterogeneous supercomputer cluster on every desk, in every home and in every pocket.


Welcome to the Jungle by Herb Sutter


Short break before the next talk by Herb Sutter.


Performance and Efficiency

Critical for Technological Advancement
enables Fundamentally New Products and Experiences
Incremental Steps Add Up to Game-Changing Optimizations



Atomic Operations are Slow
– Remove Ordering Constraints if Possible
Unique Set of Trade Offs
– Not Right For All Applications


Reference Implementation

– Fragment Data to Reduce Contention
– Use Spin Locks Since Contention is Low



– Simple
– Isolate Important Components


Projecting Hash Values

Hybrid Solution:
– Make Array Optimal Size
– Mask by Next Power of 2
– If Outside the Array, Use Modulus


Projecting Hash Values

Modulus is Slow
– Integer Division
Masking With Power of 2 Is Wasteful
– e.g. 1000 entries at an 80% load factor needs 1250 slots, but nextPowTwo(1250) = 2048 – wasted space



Increments Cached Per Thread
– Periodically Flushed Atomically
– Traverse All Threads for Exact Value

New ThreadLocal Class
– Fast
– Provides Iterator Across All Threads



Store Arrays of Pointers with __thread
Unique Index Per ThreadLocal Instance
Use Mutex on ctor/dtor/iterate
– One per “Tag”
About 4x Faster than boost::thread_specific_ptr


Tracking Size

Obvious Solution : Atomic Increment
Obvious Solution: Atomic Increment
– But wait! 50M inc/sec < 100 insert/sec – Atomic Increment is a Bottleneck

Try Again: ThreadCachedInt
– Sharded Atomic Increment – Requires a Lot of Shards
– Thread Local Storage – Better with New ThreadLocal Class


Remaining Issues

How to Track Size?
Modulus is Slow



1. Do a Find
2. Miss -> Failed Erase
3. Found -> compare-and-swap (cmpxchg) the Key to a Sentinel

Empty -> Locked -> Valid -> Erased



1. Fill to Max Load Factor
– Can’t Rehash

2. Create Another SubMap

3. Re-Project and Continue Probing


Atomic Probing Algorithm

Always in a Consistent State
– find() has No Memory Barriers

insert() Uses Keys for Locking
– Very Low Contention
– No Additional Storage


Probing Algorithm
– Project Hash Onto Array
– Advance Until Equal or Empty
– Wrap Around


New Approach: AtomicHashMap

Result Preview
– Blistering Speed (1.3x-5x)
– Memory Efficient + Compact References
– std::unordered_map Interface

– Only Integer Keys
– Sensitive to Initialization Size
– Erased Entries Never Reclaimed


Graph Storage and Access
Possible Approaches

Layered Maps
Intel tbb::concurrent_hash_map
Build Something New


Graph Storage and Access

– CPU Bottleneck
– Memory Sensitivity
– Many Cores Accessing the Same Data
– Updated in Real-Time


What’s it look like?

Huge trove of data; many users have 1M 2nd-degree connections
Data and access are highly dynamic: live updates, new products
Bottom line: we need performance


Facebook: The Graph?

What’s it Good For?
News feed, what’s going on in the action graph.
Search, find people and things
Ads, Groups, etc.


What should we do about performance? Challenge assumptions (hardware tradeoffs are changing), talk, and share code.


At Facebook: incremental changes add up, speed is critical for virality, and it helps the environment -> performance is very important.


Massively Parallel Hashmaps: Spencer Ahrens

Why do we care about hashmap performance?


Short break between talks, will be back!


folly: https://github.com/facebook/folly


Folly components:

1. Containers
2. Parsing and formatting
3. Multithreading helpers
4. General utilities
5. More to come


fbstring will be open-sourced!

folly: Facebook’s Open-Source Library


Andrei emphasizes that you should learn math; CS people really need to know math.


fbstring speedup – 2.53 times faster with ILP and loop unrolling


modulo subtraction is injective


ILP gives a 193% speedup over plain atoul.


associative means parallelizable


Minimizing data dependencies

“5235” -> 5235! string to integral (ids are numbers stored as strings)

string atoui -> loops over each char and converts it to the appropriate digit… see typical atoui implementations

fbstring actually takes advantage of loop unrolling when converting strings to ints.


Better instruction level parallelism = fewer data dependencies


Instruction Level Parallelism

1. Pipelining
2. Superscalar execution
3. OoO execution
4. Register renaming
5. Speculative execution


The Forgotten Parallelism

People forget about CPU-level parallelism that could be taken advantage of. Each core in a Facebook server (Nehalem CPU) has 3 ALUs, which sit forgotten. You can do three simultaneous arithmetic calculations in a single thread! (Sandy Bridge has more than 3 ALUs.) How do we use them?


Some Benchmarks

push_back speedup vs std::string is huge: a 400% speedup for 1 push_back call, 500% for 2 push_back calls, and less than 200% for 256K push_back calls. The plateau is 1.4 times faster than the standard implementation.


The French Connection: jemalloc

Jason Evans wrote a great allocator for FreeBSD and it’s used everywhere at facebook.

1. Highly concurrent
2. Cache-friendly
3. Fast

Extended allocator interface
1. Allocate in allocator-sized slabs
2. In-place expansion where possible

fbstring avoids fragmentation by knowing what jemalloc wants: the string uses the sizes jemalloc likes to allocate for its internal representation.

jemalloc allows reallocation in place – e.g. a large string built in a loop benefits, and occasional quadratic algorithms become linear! woot!


23 >> 15 (23 is the max chars stored in-place by fbstring; 15 is what other libraries use for their strings)

23 vs. 15 may not sound like a big deal… but it is: Facebook has many protocols that store ints as strings, and many of those ints are small, but not all. User IDs at Facebook are randomly generated 64-bit numbers, and almost none of them fit in 15 chars, so they would spill into heap-allocated memory; with fbstring none spill, and they stay highly optimized.


fbstring has control. fbstring lets them take full control over where data goes, optimizes memory allocation. The performance impact is huge.

Andrei is making fun of Python/Java programmers for not knowing what happens in memory.


fbstring data layout

Small strings: hello, world!\0 | 9 – the 9 is the number of bytes unused, which lets them append quickly; the last byte doubles as the null terminator when there are no unused bytes left (it reads zero).

Medium/large strings: char* data | size_t length | size_t capacity | FF – if the FF flag bits are all ones, the string is in this format; otherwise it is in the small format.

Copy-on-write buffer: atomic refCount | hello\0


fbstring’s 3-tiered layout

1. In-place store for 0..23 chars
2. Eagerly copied values 24..255 chars
3. Copy-on-write 256 chars and above


We are live again! Andrei is speaking, the title of the talk is Sheer Folly.

C++ – why do we care? Why do we still care about systems languages?

1. Data Layout Control – how data sits in memory. C++ lets you say literally where every byte goes. Example: fbstring. Hang on, let's not reinvent the wheel here – but we did. Demotivation for std::string:
– std::string as a container is fail
– std::allocator is fail
– std::char_traits is fail
– algorithm integration is fail
– copy-on-write became fail
Motivation: because it's there, which lets us take existing code and link it against fbstring.

2. Cooperation with the allocator

3. Copy on write optimization


Lunch Break! Following lunch, Andrei Alexandrescu will be making an announcement of interest to C++ programmers around the world. Stay tuned.


Slides from the talk will become available after the event.



– Perfect forwarding fails for some kinds of arguments
– To “specialize” forwarding templates, use helper class templates or std::enable_if
– templates + pImpl => explicitly force instantiations

More information on comp.std.c++ discussion initiated 16 January 2010


VC11 compiler rejects the explicit instantiation. MS has confirmed two compiler bugs but a workaround exists.

The const char array size in the specialization can be fixed, but that is left as homework for the attendees.


All you need to do is look at the g++ error message and copy it into a source file; this will explicitly instantiate the template. But double-check the type deduced by the compiler, because it may not be the same as the type passed into the function. g++ is so self-aware that the result will actually compile, link and run. 😀


Explicit Instantiation

The templates again:

template<typename T> Widget::Widget(T&& param); // forwarding ctor

template<typename T> void Widget::setName(T&& param); // forwarding setter

The uses again:

Widget w("This is a test");

w.setName(std::string("This is another test"));


Instantiations we need:

Widget ctor with T = const char(&)[15]
setName with T = std::string and also T = std::string&


Forwarding and the pImpl Idiom. pImpl-ization is straightforward and compiles without fuss in client code, but the linker disagrees and can't link! Ha! The usual template instantiation problem: compilers need the template source for instantiation, and pImpl precludes that.
Possible solutions:
– abandon pImpl, i.e. admit defeat (!)
– use a different build chain -> some support link-time instantiation
– perform explicit instantiation


This is the first adventure in perfect forwarding: how do you specialize for different types without messing with the universal binding syntax, T&&, forced on you?


Specializing for shared_ptr does not cover const shared_ptr. To catch both we can add another partial specialization.


Random comment from the crowd: I love being able to write >> in the new C++11 without the space. Scott Meyers is envious that someone can be so happy about that.


We definitely want to have a specialization for shared_ptrs.


Specializing for pointers is unrealistic – it is rarely useful to distinguish pointer lvalues and rvalues.

Specializing for UDTs is very realistic.
– Often useful to distinguish lvalues and rvalues
– std::shared_ptr is such a UDT: moving is cheaper than copying.


Dispatching through a class template enables the use of class template partial specialization, which lets us avoid the enable_if problems.


Too much code on the slide, but there are several drawbacks to enable_if, especially when we start using expressions such as const char*.


One solution: overload via std::enable_if

The code below is the generalization… (the conditions were cut off in my notes; reconstructed here with a hypothetical is_shared_ptr trait – one overload is enabled for shared_ptrs, the other for everything else):

template<typename T>
typename std::enable_if<
  is_shared_ptr<typename std::remove_reference<T>::type>::value
>::type
fwd(T&& param);

template<typename T>
typename std::enable_if<
  !is_shared_ptr<typename std::remove_reference<T>::type>::value
>::type
fwd(T&& param);


Function templates can't be specialized, only overloaded. But the form of forwarding templates is constrained: the only parameter type that works is T&&! Works = handles all combos of const/non-const expressions.


“Specializing” forwarding templates

Forwarding to f is easy:

template<typename T> void fwd(T&& param) { f(std::forward<T>(param)); }

but what if we want to specialize our template for pointers?


Several kinds of arguments can’t be perfect-forwarded.
* 0 as null pointer constant
* braced initializer lists
* integral consts…


std::make_shared is a function that returns a shared pointer and is typically more efficient than creating your object and then wrapping it in a shared pointer.


std::string s("PF example");
auto pw = std::make_shared<Widget>(s, std::vector<int>({1,2,3}));
template<typename T, typename... ParamTs> shared_ptr<T> make_shared...

// in g++

This call goes through several layers, and thus perfect forwarding becomes extremely desirable.


Perfect forwarding is our main interest. It lets you pass parameters straight through templates; a common use case is building constructors in templated classes. This is where we hit problems with the new move constructs… perfect forwarding is difficult because we have to be careful with rvalues and rvalue references.


In order to support moves, C++11 adds a new construct called rvalue references. The new syntax type&& gives you an rvalue reference. Rvalue references identify candidates for moving rather than copying.

Must be careful: in function templates, type&& parameters are not really rvalue references. They mean “bind to anything” -> universal references.


template<typename T> void f(T&& param);
int x;
f(x); // lvalue: T is int&, instantiates f(int&)
f(factory()); // rvalue: T is std::shared_ptr<Widget>, instantiates f(std::shared_ptr<Widget>&&)


Scott will be speaking about C++11. Lvalues are generally expressions whose address you can take, and they are important for copy requests. In C++11, copy requests can often become move requests; the language now takes more care with the difference between lvalues and rvalues.


C++ is seeing new growth because we can't get much faster processors anymore. Scott will now give his talk.


Opening remarks are starting. Andrei Alexandrescu is our MC, hosting the first #fb #cpp #conf. There is great demand for and knowledge of C++, and Facebook is very excited about this new growth in interest.


Slides are coming up: Adventures in Perfect Forwarding by Scott Meyers. Opening remarks are expected to be on time. #fb #cpp #conf


Blinds are going down here at the fb cpp conf. The stage is being prepared. 30 minutes to go before the opening remarks. Scott Meyers' talk will follow.


48 minutes to go before opening remarks, the conference hall is slowly filling up.


The Facebook conference registration will be starting at 11. They've got sweet nametags.

[&facebook]() {
return std::move(fast) & break_things();
}