asciidoc Drug repurposing

readme.adoc
= Drug repurposing by hetnet relationship prediction: a new hope
:author: Daniel Himmelstein
:twitter: @dhimmel
:thumbnail: https://raw.githubusercontent.com/dhimmel/rephetio/55b03ca1328a01299766cf002c947f2dc85ba5c6/figure/network-v1.0-unlabeled-thumbnail.png
:neo4j-version: 2.3.2
:style: #FF1F17/#FF1F17/black:Drug(name), #965117/#965117/black:Disease(name), #FCC940/#FCC940/black:SideEffect(name), #0098F8/#0098F8/black:Gene(name)

'''

A long time ago in a galaxy far, far away....

It is a dark time for drug discovery. The Empire spends over a billion dollars in R&D per new drug approval. The process takes decades, 9 out of 10 attempts fail, and the cost has been doubling every 9 years since 1970.

But, a small band of Rebel scientists pursue an alternative. Using public data and open source software, the Rebels are predicting new uses for existing drugs. Repurposing drugs avoids the main costs of drug development and is much faster since the drugs are already available and known to be safe.

The Rebels integrated data from every corner of the galaxy. Their hetnet contains 50 thousand nodes of 10 labels and 3 million relationships of 26 types. The Force allows a Data Jedi to predict which drugs treat which diseases. However, to learn the Force—also known as a machine learning classifier—the Rebels need to summarize the network connectivity between each drug and disease. Join them in using neo4j to extract the features needed to learn the Force.

== A subnetwork of the Rebel hetnet
'''

Since the complete Rebel hetnet consists of 3 million relationships, it takes the entire Alliance Fleet to store. However, we've constructed a small illustrative subnetwork that fits inside a single GraphGist starship. The left-hand image below shows the data model for the subnetwork, which contains four node labels and six relationship types. On the right, the entire Rebel hetnet is visualized from hyperspace: nodes are laid out orbitally by label, and relationships are colored by type. Labels omitted from the subnetwork are shown in gray. We include this image to show the full progress of the Rebellion.

image::https://raw.githubusercontent.com/dhimmel/rephetio/55b03ca1328a01299766cf002c947f2dc85ba5c6/figure/graphgist.png[GraphGist data model and entire Rebel hetnet visualization]

link:http://neo4j.com/developer/cypher-query-language/[Cypher] is the query language of the neo4j database. The following Cypher query creates the subnetwork for this GraphGist. It's hidden by default, but you can click the expand arrows to see it.

//hide
[source,cypher]
----
CREATE

 // create drugs
 (clonidine:Drug {name: 'Clonidine'}),
 (dipivefrin:Drug {name: 'Dipivefrin'}),
 (pilocarpine:Drug {name: 'Pilocarpine'}),

 // create diseases
 (glaucoma:Disease {name: 'glaucoma'}),
 (hypertension:Disease {name: 'hypertension'}),

 // create genes
 (ADRA2A:Gene {name: 'ADRA2A'}),
 (ENG:Gene {name: 'ENG'}),
 (MTHFR:Gene {name: 'MTHFR'}),
 (OPTN:Gene {name: 'OPTN'}),
 (TGFBR2:Gene {name: 'TGFBR2'}),
 (TNIP1:Gene {name: 'TNIP1'}),

 // create side effects
 (cardiac_arrhythmia:SideEffect {name: 'Cardiac Arrhythmia'}),
 (hypersensitivity:SideEffect {name: 'Hypersensitivity'}),
 (stinging:SideEffect {name: 'Stinging Sensation'}),
 (body_odor:SideEffect {name: 'Body odor'}),

 // create treatments
 (dipivefrin)-[:TREATS]->(glaucoma),
 (clonidine)-[:TREATS]->(hypertension),
 (pilocarpine)-[:TREATS]->(glaucoma),

 // create gene-disease associations
 (ADRA2A)-[:ASSOCIATES]->(hypertension),
 (MTHFR)-[:ASSOCIATES]->(hypertension),
 (ENG)-[:ASSOCIATES]->(hypertension),
 (OPTN)-[:ASSOCIATES]->(glaucoma),

 // create drug side effects
 (clonidine)-[:CAUSES]->(cardiac_arrhythmia),
 (dipivefrin)-[:CAUSES]->(cardiac_arrhythmia),
 (pilocarpine)-[:CAUSES]->(cardiac_arrhythmia),
 (pilocarpine)-[:CAUSES]->(body_odor),
 (pilocarpine)-[:CAUSES]->(stinging),
 (dipivefrin)-[:CAUSES]->(stinging),
 (clonidine)-[:CAUSES]->(hypersensitivity),

 // create drug target relationships
 (clonidine)-[:TARGETS]->(ADRA2A),

 // create drug-gene regulations
 (dipivefrin)-[:REGULATES]->(TNIP1),
 (clonidine)-[:REGULATES]->(TGFBR2),

 // create physical interactions
 (OPTN)-[:INTERACTS]->(TNIP1),
 (ENG)-[:INTERACTS]->(TGFBR2)
----

The subnetwork used in this GraphGist is shown below. Use your lightsaber to reposition the nodes for a better view.

//graph

The example network contains 3 `TREATS` relationships. Between the 3 drugs and 2 diseases, there are 6 possible treatments (drug–disease pairs). The goal is to identify network patterns that distinguish the 3 present from the 3 missing `TREATS` relationships.
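As a warm-up, the following sketch lists all 6 pairs and their current treatment status, using the same `size((…)-[:TREATS]-(…))` pattern that the feature-extraction queries below rely on:

[source,cypher]
----
// List every drug–disease pair and whether it is a known treatment
MATCH (drug:Drug), (disease:Disease)
RETURN
  drug.name AS drug,
  disease.name AS disease,
  size((drug)-[:TREATS]-(disease)) AS treatment
ORDER BY treatment DESC
----

//table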

Specifically, the Data Jedi Youngling searches for types of paths that occur more frequently between treatments than non-treatments. Here, we'll investigate three path types (metapaths):

* `(:Drug)-[:TARGETS]-(:Gene)-[:ASSOCIATES]-(:Disease)`
* `(:Drug)-[:REGULATES]-(:Gene)-[:INTERACTS]-(:Gene)-[:ASSOCIATES]-(:Disease)`
* `(:Drug)-[:CAUSES]-(:SideEffect)-[:CAUSES]-(:Drug)-[:TREATS]-(:Disease)`

Will these path types be sufficient to use the Force?

== Drug targets and disease-associated genes
'''

Both drugs and diseases relate to genes. Drugs target genes by binding to the proteins they encode. Diseases associate with genes when a gene plays a role in the disease or determines susceptibility to it. One approach to drug discovery is to identify drugs that target genes associated with a disease. This concept is expressed by the `(:Drug)-[:TARGETS]-(:Gene)-[:ASSOCIATES]-(:Disease)` metapath. The following query counts the number of paths of this type for each drug–disease pair:

[source,cypher]
----
// Find all drug-disease pairs
MATCH (n0:Drug), (n2:Disease)
// Extract paths where the drug targets a gene associated with the disease
OPTIONAL MATCH paths = (n0:Drug)-[:TARGETS]-(n1:Gene)-[:ASSOCIATES]-(n2:Disease)
RETURN
  // Retrieve the name of the drug and disease
  n0.name AS drug,
  n2.name AS disease,
  // Retrieve whether the drug treats the disease
  size((n0)-[:TREATS]-(n2)) AS treatment,
  // Count the number of paths between the drug and disease
  count(paths) AS path_count
// Sort the rows
ORDER BY path_count DESC, treatment DESC
----

//table

The query finds one path, between Clonidine and hypertension. Clonidine does treat hypertension, suggesting that identifying drugs which target disease-associated genes is a sound repurposing strategy. However, the coverage of this approach is low: the other two known treatments have a path count of zero. Therefore, the Padawan must look to other path types with better coverage.

== Gene regulation and interactions
'''

Verifying drug targets requires time-consuming experiments that aren't yet fully automatable. Therefore, this relationship type is highly incomplete--a common phenomenon in biological networks. However, recent high-throughput technologies have been able to more comprehensively relate drugs to genes. A recent project called LINCS profiled thousands of drugs and measured which genes change in abundance after cells are exposed to each drug. A drug is said to regulate a gene if the drug either increases or decreases the number of transcripts corresponding to that gene.

Another method for increasing the coverage of a path type is to increase its length. When proteins encoded by two genes form physical bonds inside a cell, the genes are said to interact. Genes tend to interact with other genes that perform similar functions, so adding an `INTERACTS` relationship to a metapath shifts the focus from a single gene to a neighborhood of functionally related genes.

Tying these sources together is the `(:Drug)-[:REGULATES]-(:Gene)-[:INTERACTS]-(:Gene)-[:ASSOCIATES]-(:Disease)` metapath. Starting from a disease, implicated genes are detected by looking for genes that interact with the disease's associated genes. Then drugs are identified which regulate these genes. The goal is to find drugs that interfere with a gene neighborhood implicated in a disease.

[source,cypher]
----
// Find all drug-disease pairs
MATCH (n0:Drug), (n3:Disease)
// Extract paths following the specified metapath
OPTIONAL MATCH paths = (n0:Drug)-[:REGULATES]-(n1:Gene)-[:INTERACTS]-(n2:Gene)-[:ASSOCIATES]-(n3:Disease)
WITH
  // reidentify the source and target nodes
  n0 AS source, n3 AS target, paths,
  // Extract the degrees along each path
  [
    size((n0)-[:REGULATES]-()),
    size(()-[:REGULATES]-(n1)),
    size((n1)-[:INTERACTS]-()),
    size(()-[:INTERACTS]-(n2)),
    size((n2)-[:ASSOCIATES]-()),
    size(()-[:ASSOCIATES]-(n3))
  ] AS degrees
RETURN
  // Retrieve the name of the drug and disease
  source.name AS drug,
  target.name AS disease,
  // Retrieve whether the drug treats the disease
  size((source)-[:TREATS]-(target)) AS treatment,
  // Compute the path count
  count(paths) AS path_count,
  // Compute the degree-weighted path count with w = 0.5
  sum(reduce(pdp = 1.0, d in degrees| pdp * d ^ -0.5)) AS DWPC
// Sort the rows
ORDER BY DWPC DESC
----

//table

We now have two drug–disease pairs with at least one path. Since they're both treatments, this feature appears predictive.

In the above query, we also calculate the degree-weighted path count (_DWPC_) for each drug–disease pair. The _DWPC_ is a modification to the path count, which downweights paths through highly connected nodes. By rewarding highly specific relationships, which tend to be more informative, degree weighting can improve predictiveness. A single parameter, set here to `0.5`, controls the strength of the weighting. For the best Jedi training, try learning the _DWPC_ algorithm from its Cypher implementation. If that fails, see panel D of link:https://doi.org/10.1371/journal.pcbi.1004259.g002[this diagram].
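Concretely, each path contributes the product of the degrees along it, each degree raised to the power −_w_; the _DWPC_ sums these path-degree products over all paths between the pair, exactly as the `sum(reduce(...))` expression above computes:

[latexmath]
++++
\[ \mathrm{DWPC} = \sum_{\text{paths}} \; \prod_{d \,\in\, \text{degrees}} d^{-w}, \qquad w = 0.5 \]
++++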

== Side effects
'''

FDA-approved drugs are required to list known side effects. Rebel researchers used text mining to catalog the side effects for all approved drugs, which we include in our hetnet. Side effects paint a high-level picture of a drug's mechanism, regardless of whether the underlying molecular targets are known. One hypothesis is that drugs with similar side effects are likely to treat the same diseases. The `(:Drug)-[:CAUSES]-(:SideEffect)-[:CAUSES]-(:Drug)-[:TREATS]-(:Disease)` metapath looks for drugs that share side effects with a drug known to treat a disease.

//hide
[source,cypher]
----
// Find all drug-disease pairs
MATCH (n0:Drug), (n3:Disease)
// Extract paths following the specified metapath
// Omit node labels for efficiency
OPTIONAL MATCH paths = (n0)-[:CAUSES]-(n1)-[:CAUSES]-(n2)-[:TREATS]-(n3)
// Specify the join index to reach lightspeed
USING JOIN ON n1
// Exclude paths with duplicate nodes
WHERE n0 <> n2
WITH
  // reidentify the source and target nodes
  n0 AS source, n3 AS target, paths,
  // Extract the degrees along each path
  [
    size((n0)-[:CAUSES]-()),
    size(()-[:CAUSES]-(n1)),
    size((n1)-[:CAUSES]-()),
    size(()-[:CAUSES]-(n2)),
    size((n2)-[:TREATS]-()),
    size(()-[:TREATS]-(n3))
  ] AS degrees
RETURN
  source.name AS drug,
  target.name AS disease,
  // Retrieve whether the drug treats the disease
  size((source)-[:TREATS]-(target)) AS treatment,
  // Compute the path count
  count(paths) AS PC,
  // Compute the degree-weighted path count with w = 0.5
  sum(reduce(pdp = 1.0, d in degrees| pdp * d ^ -0.5)) AS DWPC
// Sort the rows
ORDER BY DWPC DESC
----

//table

Since approved drugs have abundant side effects, this feature is more complete than the previous two. All but one drug–disease pair have at least one path, and several have two. The top two _DWPCs_ correspond to treatments, suggesting that side effects can inform drug repurposing. The third-ranked pair, Clonidine and glaucoma, is also a treatment, although this knowledge wasn't in our subnetwork. This illustrates the promise of the approach: many effective treatments are currently unknown, so the top-ranking drug–disease pairs that are not known treatments are the ideal place to look for repurposing candidates.

== Epilogue
'''

We've computed features for three different path types in Cypher. In all three cases, paths were more prevalent between treatments than non-treatments. However, any individual path type was insufficient to separate all treatments from non-treatments. Thus the Jedi Knight uses the Force to combine information from many path types into a predictive classifier.

[quote, Jedi Master]
____
Weak alone the features are. Integrate and use the Force must you; the glue to bring diverse datasets together. Predict will you the probability that each drug treats each disease. But beware of the dark side. Yes. Relational databases and secrecy are the path to the dark side. Make open data and use neo4j. Then, only then, a Jedi will you be.
____

If you're interested in this project, visit link:https://doi.org/10.15363/thinklab.4[the *Rebel base*] to learn more.

'''
© 2016, Daniel Himmelstein, released as link:https://creativecommons.org/licenses/by/4.0/[CC-BY]

asciidoc uuid.adoc

uuid.adoc
= Partition UUIDs on Linux
:toc:


/////////////////////////////////
asciidoctor live
* https://gist.asciidoctor.org/
* https://www.tutorialspoint.com/online_asciidoc_editor.php
* https://asciidoclive.com

/////////////////////////////////


== Installing the packages
-----
# apt install util-linux uuid-runtime
-----

== Listing the UUIDs
Look up the identifier:

-----
# ls -l /dev/disk/by-uuid/
total 0
lrwxrwxrwx 1 root root 10 Oct  6 18:37 3725-1C05 -> ../../sda1
lrwxrwxrwx 1 root root 10 Oct  6 18:37 fd695ef5-f047-44bd-b159-2a78c53af20a -> ../../sda2
-----


-----
# blkid
/dev/sda1: LABEL="boot" UUID="3725-1C05" TYPE="vfat" PARTUUID="6e7574f9-01"
/dev/sda2: LABEL="rootfs" UUID="fd695ef5-f047-44bd-b159-2a78c53af20a" TYPE="ext4" PARTUUID="6e7574f9-02"
-----

== Generating a UUID

-----
# uuidgen
c2b1a0a9-4e6a-4384-aa60-b72ffc82cdc3
-----

== Changing a partition's UUID
-----
# tune2fs /dev/sda2 -U c2b1a0a9-4e6a-4384-aa60-b72ffc82cdc3
-----

or:

-----
# tune2fs /dev/sda2 -U `uuidgen`
-----


asciidoc AsciiDoc test

AsciiDoc test

test.adoc
:chapter-label:
:icons: font
:lang: en
:sectanchors:
:sectlinks:
:sectnums:
:source-highlighter: highlightjs
:toc: left
:toclevels: 2

= Try AsciiDoc

There is _no reason_ to prefer http://daringfireball.net/projects/markdown/[Markdown]:
it has *all the features*
footnote:[See http://asciidoc.org/userguide.html[the user guide].]
and more!

NOTE: Great projects use it, including Git, WeeChat and Pacman!

== Comparison

.Snippets of markup footnote:[More snippets in http://powerman.name/doc/asciidoc[the cheatsheet]]
[cols=",2*<"]
|===
.3+^.^s| Link |AsciiDoc |`http://example.com[Dummy]`
              |Markdown |`[Dummy](http://example.com)`
              |Textile |`"Dummy":http://example.com`

.3+^.^s| Face |AsciiDoc |`Either *bold* or _italic_`
              |Markdown |`Either **bold** or *italic*`
              |Textile  |`Either *bold* or _italic_`

.3+^.^s| Header |AsciiDoc |`== Level 2 ==`
                |Markdown |`## Level 2`
                |Textile  |`h2. Level 2`
|===

== Ruby code to render AsciiDoc

[source,ruby]
----
require 'asciidoctor'  # <1>

puts Asciidoctor.render_file('sample.adoc', :header_footer => true)  # <2>
----
<1> Imports the library
<2> Reads, parses and renders the file


And here is some silly math:
e^πi^ + 1 = 0 and H~2~O.

asciidoc sample9.adoc

sample9.adoc

[#myid]

asciidoc gistfile1.adoc

gistfile1.adoc
ifdef::env-github[] 
:tip-caption: :bulb: 
:note-caption: :information_source: 
:important-caption: :heavy_exclamation_mark: 
:caution-caption: :fire: 
:warning-caption: :warning: 
endif::[]


= Types, Values, and Variables
:idprefix: 
:idseparator: - 
:sectanchors: 
:sectlinks: 
:sectnumlevels: 6
:sectnums: 
:toc: macro 
:toclevels: 6 
:toc-title: 

toc::[] 





== Java Language Types 

=== Statically Typed
The Java programming language is a statically typed language, which means that every variable and every expression has a type that is known at compile time.


=== Strongly Typed
The Java programming language is also a strongly typed language, because types limit the values that a variable (§4.12) can hold or that an expression can produce, limit the operations supported on those values, and determine the meaning of the operations. Strong static typing helps detect errors at compile time.




== Primitive Types and Reference Types

=== Primitive Types
The primitive types (§4.2) are the boolean type and the numeric types. The numeric types are the integral types byte, short, int, long, and char, and the floating-point types float and double.

IMPORTANT: A primitive type is predefined by the Java programming language and named by its reserved keyword.

IMPORTANT: Local variables of primitive types are stored on the stack. For example, when the compiler compiles the following declaration, it emits bytecode instructions that leave 4 bytes of room on the stack for the `int`.

[source,java]
----
int a = 5;
----





==== Numeric Types
The numeric types are the integral types and the floating-point types.

==== Integral Types
The integral types are byte, short, int, and long, whose values are 8-bit, 16-bit, 32-bit, and 64-bit signed two's-complement integers, respectively, and char, whose values are 16-bit unsigned integers representing UTF-16 code units.



==== Floating-point Types
The floating-point types are float, whose values include the 32-bit IEEE 754 floating-point numbers, and double, whose values include the 64-bit IEEE 754 floating-point numbers.

IMPORTANT: Java has two primitive types for floating-point numbers: `float` uses 4 bytes and `double` uses 8 bytes.

CAUTION: float is not as accurate (it has less precision, because it is smaller).

[source,java]
----
double d = 0.0;
float f = 9.0f;
----


==== Boolean Type
The boolean type has exactly two values: true and false.

IMPORTANT: The boolean type represents a logical quantity with two possible values, indicated by the literals true and false.


===== Bitwise Operators

Java's bitwise operators operate on the individual bits of integer (int and long) values.

IMPORTANT: It helps to know how integers are represented in binary.

There are two common ways to represent integer values in binary.

TIP: Before we switch to the bitwise operators, let's see how this representation works.

====== Storing Integer Values in Binary Form

======= Two's Complement Form

This representation rule says:

- for zero, use all 0's;
- for positive integers, start counting up, with a maximum of 2^(n−1)^ − 1 for n bits;
- for negative integers, do exactly the same thing, but switch the roles of 0's and 1's (so instead of starting with 0000, start with 1111; that's the "complement" part).

For example, with 4 bits we can represent −8 through +7 in two's-complement form (a Java sketch for inspecting these bit patterns follows the list below).

TIP: The first bit indicates the sign; the remaining bits encode the magnitude.

The maximum positive value with 4 bits is 2^3^ − 1 = 7:
0000 - zero

0001 - one

0010 - two

0011 - three

0100 to 0111 - four to seven
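A quick way to inspect these bit patterns in Java is `Integer.toBinaryString`, which prints the 32-bit two's-complement representation (a minimal sketch; the class name is just illustrative):

[source,java]
----
public class TwosComplementDemo {
    public static void main(String[] args) {
        System.out.println(Integer.toBinaryString(7));   // 111
        // Negative values show the complemented pattern, padded to 32 bits:
        System.out.println(Integer.toBinaryString(-7));  // 11111111111111111111111111111001
    }
}
----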


======= Unsigned Form
First consider an unsigned integer stored in 4 bits. You can have the following:



0000 = 0

0001 = 1

0010 = 2

...

1111 = 15


CAUTION: These are unsigned because there is no indication of whether they are negative or positive. 
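With these representations in mind, here is a minimal sketch of the bitwise operators on `int` values (the numbers are illustrative; `Integer.toUnsignedString` requires Java 8+):

[source,java]
----
public class BitwiseDemo {
    public static void main(String[] args) {
        int a = 0b1100;  // 12
        int b = 0b1010;  // 10
        System.out.println(Integer.toBinaryString(a & b));  // 1000 (AND)
        System.out.println(Integer.toBinaryString(a | b));  // 1110 (OR)
        System.out.println(Integer.toBinaryString(a ^ b));  // 110  (XOR; leading zeros dropped)
        System.out.println(~a);                             // -13  (NOT flips every bit)
        System.out.println(a << 1);                         // 24   (left shift)
        System.out.println(-8 >> 1);                        // -4   (arithmetic right shift keeps the sign)
        System.out.println(-8 >>> 1);                       // 2147483644 (logical right shift fills with 0s)
        System.out.println(Integer.toUnsignedString(-8));   // 4294967288 (unsigned view of the same bits)
    }
}
----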



[#id]


====== Relational Operators

The equality operators == and != (§15.21.2)



====== Logical Complement Operator 

The logical complement operator ! (§15.15.6)


====== Logical Operators

The logical operators &, ^, and | (§15.22.2)


====== Conditional-and and Conditional-or Operators

The conditional-and and conditional-or operators && (§15.23) and || (§15.24)
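The difference matters when the right-hand operand has side effects or could throw: `&&`/`||` short-circuit, while `&`/`|` always evaluate both sides. A minimal sketch (class and method names are illustrative):

[source,java]
----
public class ShortCircuitDemo {
    public static void main(String[] args) {
        String s = null;
        // && short-circuits: s.isEmpty() is never evaluated, so no NullPointerException
        System.out.println(s != null && s.isEmpty());  // false
        // & would evaluate both operands and throw here if uncommented:
        // System.out.println(s != null & s.isEmpty());
        System.out.println(true | sideEffect());       // prints "evaluated anyway", then true
    }

    static boolean sideEffect() {
        System.out.println("evaluated anyway");
        return false;
    }
}
----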


====== Conditional Operator

The conditional operator ? : (§15.25)

The string concatenation operator + (§15.18.1), when given a String operand and a boolean operand, converts the boolean operand to a String (either "true" or "false") and then produces a newly created String that is the concatenation of the two strings.
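A minimal sketch of the conditional operator and of the boolean-to-String conversion just described:

[source,java]
----
public class ConditionalDemo {
    public static void main(String[] args) {
        int x = -3;
        // The conditional operator picks one of two values:
        String sign = (x >= 0) ? "non-negative" : "negative";
        System.out.println(sign);             // negative
        // Concatenation converts the boolean operand to "true"/"false":
        System.out.println("flag: " + true);  // flag: true
    }
}
----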




=== Reference Types
The reference types (§4.3) are class types, interface types, and array types. There is also a special null type. An object (§4.3.1) is a dynamically created instance of a class type or a dynamically created array. The values of a reference type are references to objects. All objects, including arrays, support the methods of class Object (§4.3.2). String literals are represented by String objects.


==== Implicitly and Explicitly Declaration

===== Implicitly Declared Reference Types

TIP: Implicit means done by the JVM or the tool.

[source,java]
----
/* An array is implicitly created
   by an array initializer: */
Point a[] = { new Point(0, 0), new Point(1, 1) };

/* Strings are implicitly created
   by the + operator: */
System.out.println("p: " + p);
System.out.println("a: " + a);
----


===== Explicitly Declared Reference Types

TIP: Explicit means done by the programmer.

[source,java]
----
/* An array is explicitly created
   by an array creation expression: */
String sa[] = new String[2];
sa[0] = "he"; sa[1] = "llo";
System.out.println(sa[0] + sa[1]);
----

[source,java]
----
/* A Point is explicitly created
   using newInstance: */
Point p = null;
try {
    p = (Point) Class.forName("Point").newInstance();
} catch (Exception e) {
    System.out.println(e);
}
----



==== Operators On References To Objects

===== Field Access


===== Method Invocation


===== The Cast Operator 


===== The Instanceof Operator 


===== The Reference Equality Operators == and !=


===== The Conditional Operator ? :



==== The Class Object

===== getClass()

The class Object is a superclass (§8.1.4) of all other classes.

All class and array types inherit (§8.4.8) the methods of class Object.

IMPORTANT: Every class inherits the getClass() method from Object; it returns the runtime class of the object it is called on.
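A minimal sketch: `getClass()` returns the runtime class of the object, regardless of the static type of the reference (the class name is illustrative):

[source,java]
----
public class GetClassDemo {
    public static void main(String[] args) {
        Object o = "hello";  // static type Object, runtime class String
        System.out.println(o.getClass().getName());     // java.lang.String
        int[] nums = new int[3];                        // arrays are objects too
        System.out.println(nums.getClass().getName());  // [I
    }
}
----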



==== The String Class

TIP: String literals (§3.10.5) are references to instances of class String.

CAUTION: For example, the string concatenation operator + (§15.18.1) implicitly creates a new String object when the result is not a constant expression (§15.28).


[source,java]
----
String a = "hello" + "world";  // constant expression, evaluated at compile time and interned
String b = "hello";
String c = "world";
String d = b + c;              // runtime concatenation creates a new String object
System.out.println(a == d);
----

CAUTION: Output: false
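As a follow-up to the example above, `equals` compares character content rather than object identity, so it returns true where `==` returns false (this fragment continues the same variables):

[source,java]
----
System.out.println(a.equals(d));  // true: same characters, different objects
----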




=== Type Variables

<<fake/../../rep2/doc2.adoc#id,document 2>>


++++
<p id="cw3">
ss
</p>
++++






asciidoc AsciiDoc cheatsheet

AsciiDoc cheatsheet

gistfile1.adoc

= Asciidoc cheatsheet for GitHub
:idprefix:
:idseparator: -
:sectanchors:
:sectlinks:
:sectnumlevels: 6
:sectnums:
:toc: macro
:toclevels: 6
:toc-title:


toc::[]

== Attributes

== Easy Horizontal Left-Right Table
....
++++
<table class=cheatsheet> 
++++

++++
<tr><td class=cheatsheet-source>
++++
Left
++++
</td><td class=cheatsheet-render>
++++
Right
++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
</table>
++++
....









== Easy TOC
....
= EASYTOC
:idprefix:
:idseparator: -
:sectanchors:
:sectlinks:
:sectnumlevels: 6
:sectnums:
:toc: macro
:toclevels: 6
:toc-title:

toc::[]

== Getting started
=== Getting started 
....

++++
</td><td class=cheatsheet-render>
++++


=== EASYTOC
:idprefix:
:idseparator: -
:sectanchors:
:sectlinks:
:sectnumlevels: 6
:sectnums:
:toc: macro
:toclevels: 6
:toc-title:

toc::[]

== Getting started
=== Getting started 

++++
</td></tr><tr><td></td><td></td></tr>
++++





++++
<tr><td class=cheatsheet-source>
++++

....
Author is "{author}" with email <{email}>,
some attribute's value is {someattribute}.

// Line with unknown attribute must be
// removed, but it's not.
Line with attribute like {nosuchattribute}.

Escaped: \{author} and +++{author}+++
....

++++
</td><td class=cheatsheet-render>
++++

Author is "{author}" with email <{email}>,
some attribute's value is {someattribute}.

// Line with unknown attribute must be
// removed, but it's not.
Line with attribute like {nosuchattribute}.

Escaped: \{author} and +++{author}+++


++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
////
TIP: You can use attributes to setup
     auto-generated table of contents, for ex.:
:toc:
:toclevels: 3

Then put this line where you want to
insert table of contents:
toc::[]

In a GitHub wiki the table of contents looks worse
than in a file, especially a multi-level one,
because it includes list bullets.
////
....

++++
</td><td class=cheatsheet-render>
++++

////
TIP: You can use attributes to setup
     auto-generated table of contents, for ex.:
:toc:
:toclevels: 3

Then put this line where you want to
insert table of contents:
toc::[]

In a GitHub wiki the table of contents looks worse
than in a file, especially a multi-level one,
because it includes list bullets.
////



++++
</td></tr>
++++


++++
</table>
++++

== Headers

++++
<table class=cheatsheet>
++++


++++
<tr><td class=cheatsheet-source>
++++

....
== Level 1
Text.

=== Level 2
Text.

==== Level 3
Text.

===== Level 4
Text.

....

++++
</td><td class=cheatsheet-render>
++++

== Level 1
Text.

=== Level 2
Text.

==== Level 3
Text.

===== Level 4
Text.


++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
Level 1
-------
Text.

Level 2
~~~~~~~
Text.

Level 3
^^^^^^^
Text.

Level 4
+++++++
Text.

....

++++
</td><td class=cheatsheet-render>
++++

Level 1
-------
Text.

Level 2
~~~~~~~
Text.

Level 3
^^^^^^^
Text.

Level 4
+++++++
Text.



++++
</td></tr>
++++


++++
</table>
++++

== Paragraphs

++++
<table class=cheatsheet>
++++


++++
<tr><td class=cheatsheet-source>
++++

....
// Paragraph title is not highlighted.
.Optional Title

Usual
paragraph.

Second paragraph.

....

++++
</td><td class=cheatsheet-render>
++++

// Paragraph title is not highlighted.
.Optional Title

Usual
paragraph.

Second paragraph.


++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
.Optional Title

 Literal paragraph.
  Must be indented.

....

++++
</td><td class=cheatsheet-render>
++++

.Optional Title

 Literal paragraph.
  Must be indented.



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
.Optional Title

[source,perl]
die 'connect: '.$dbh->errstr;

Not a code in next paragraph.

....

++++
</td><td class=cheatsheet-render>
++++

.Optional Title

[source,perl]
die 'connect: '.$dbh->errstr;

Not a code in next paragraph.



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
// Type of block (NOTE/TIP/…) should be
// shown as an icon, not as text.
.Optional Title
NOTE: This is an example
      single-paragraph note.

....

++++
</td><td class=cheatsheet-render>
++++

// Type of block (NOTE/TIP/…) should be
// shown as an icon, not as text.
.Optional Title
NOTE: This is an example
      single-paragraph note.



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
.Optional Title
[NOTE]
This is an example
single-paragraph note.

....

++++
</td><td class=cheatsheet-render>
++++

.Optional Title
[NOTE]
This is an example
single-paragraph note.



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
TIP: Some tip text.

....

++++
</td><td class=cheatsheet-render>
++++

TIP: Some tip text.



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
IMPORTANT: Some important text.

....

++++
</td><td class=cheatsheet-render>
++++

IMPORTANT: Some important text.



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
WARNING: Some warning text.

....

++++
</td><td class=cheatsheet-render>
++++

WARNING: Some warning text.



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
CAUTION: Some caution text.

....

++++
</td><td class=cheatsheet-render>
++++

CAUTION: Some caution text.



++++
</td></tr>
++++


++++
</table>
++++

== Blocks

++++
<table class=cheatsheet>
++++


++++
<tr><td class=cheatsheet-source>
++++

....
.Optional Title
----
*Listing* Block

Use: code or file listings
----

....

++++
</td><td class=cheatsheet-render>
++++

.Optional Title
----
*Listing* Block

Use: code or file listings
----



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
.Optional Title
[source,perl]
----
# *Source* block
# Use: highlight code listings
# (require `source-highlight` or `pygmentize`)
use DBI;
my $dbh = DBI->connect('...',$u,$p)
    or die "connect: $dbh->errstr";
----

....

++++
</td><td class=cheatsheet-render>
++++

.Optional Title
[source,perl]
----
# *Source* block
# Use: highlight code listings
# (require `source-highlight` or `pygmentize`)
use DBI;
my $dbh = DBI->connect('...',$u,$p)
    or die "connect: $dbh->errstr";
----



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
// Sidebar block isn't highlighted.
.Optional Title
****
*Sidebar* Block

Use: sidebar notes :)
****

....

++++
</td><td class=cheatsheet-render>
++++

// Sidebar block isn't highlighted.
.Optional Title
****
*Sidebar* Block

Use: sidebar notes :)
****



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
// Example block isn't highlighted.
.Optional Title
==========================
*Example* Block

Use: examples :)
==========================

// Example caption removed, not changed.
[caption="Custom: "]
==========================
Default caption "Example:"
can be changed.
==========================

....

++++
</td><td class=cheatsheet-render>
++++

// Example block isn't highlighted.
.Optional Title
==========================
*Example* Block

Use: examples :)
==========================

// Example caption removed, not changed.
[caption="Custom: "]
==========================
Default caption "Example:"
can be changed.
==========================


++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
.Optional Title
[NOTE]
===============================
*NOTE* Block

Use: multi-paragraph notes.
===============================

....

++++
</td><td class=cheatsheet-render>
++++

.Optional Title
[NOTE]
===============================
*NOTE* Block

Use: multi-paragraph notes.
===============================



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
////
*Comment* block

Use: hide comments
////

....

++++
</td><td class=cheatsheet-render>
++++

////
*Comment* block

Use: hide comments
////



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
++++
*Passthrough* Block
<p>
Use: backend-specific markup like
<table border="1">
<tr><td>1</td><td>2</td></tr>
</table>
++++

....

++++
</td><td class=cheatsheet-render>
++++

++++
*Passthrough* Block
<p>
Use: backend-specific markup like
<table border="1">
<tr><td>1</td><td>2</td></tr>
</table>
++++



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
 .Optional Title
 ....
 *Literal* Block

 Use: workaround when literal
 paragraph (indented) like
   1. First.
   2. Second.
 incorrectly processed as list.
 ....

....

++++
</td><td class=cheatsheet-render>
++++

.Optional Title

....
*Literal* Block

Use: workaround when literal
paragraph (indented) like
  1. First.
  2. Second.
incorrectly processed as list.
....

++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
.Optional Title
[quote, cite author, cite source]
____
*Quote* Block

Use: cite somebody
____

....

++++
</td><td class=cheatsheet-render>
++++

.Optional Title
[quote, cite author, cite source]
____
*Quote* Block

Use: cite somebody
____




++++
</td></tr>
++++


++++
</table>
++++

== Text

++++
<table class=cheatsheet>
++++



++++
<tr><td class=cheatsheet-source>
++++

....
forced +
line break

....

++++
</td><td class=cheatsheet-render>
++++

forced +
line break



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
normal, _italic_, *bold*, +mono+.

``double quoted'', `single quoted'.

normal, ^super^, ~sub~.

....

++++
</td><td class=cheatsheet-render>
++++

normal, _italic_, *bold*, +mono+.

``double quoted'', `single quoted'.

normal, ^super^, ~sub~.



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
Command: `ls -al`

+mono *bold*+

`passthru *bold*`

....

++++
</td><td class=cheatsheet-render>
++++

Command: `ls -al`

+mono *bold*+

`passthru *bold*`



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
Path: '/some/filez.txt', '.b'

....

++++
</td><td class=cheatsheet-render>
++++

Path: '/some/filez.txt', '.b'



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
// Colors and font size don't change.
[red]#red text# [yellow-background]#on yellow#
[big]#large# [red yellow-background big]*all bold*

....

++++
</td><td class=cheatsheet-render>
++++

// Colors and font size don't change.
[red]#red text# [yellow-background]#on yellow#
[big]#large# [red yellow-background big]*all bold*



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
Chars: n__i__**b**++m++[red]##r##

....

++++
</td><td class=cheatsheet-render>
++++

Chars: n__i__**b**++m++[red]##r##



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
// Comment

....

++++
</td><td class=cheatsheet-render>
++++

// Comment



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
(C) (R) (TM) -- ... -> <- => <= &#182;

....

++++
</td><td class=cheatsheet-render>
++++

(C) (R) (TM) -- ... -> <- => <= &#182;



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
''''

....

++++
</td><td class=cheatsheet-render>
++++

''''



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
// Differs from Asciidoc, but it's hard to say who's correct.
Escaped:
\_italic_, +++_italic_+++,
t\__e__st, +++t__e__st+++,
+++<b>bold</b>+++, $$<b>normal</b>$$
\&#182;
\`not single quoted'
\`\`not double quoted''

....

++++
</td><td class=cheatsheet-render>
++++

// Differs from Asciidoc, but it's hard to say who's correct.
Escaped:
\_italic_, +++_italic_+++,
t\__e__st, +++t__e__st+++,
+++<b>bold</b>+++, $$<b>normal</b>$$
\&#182;
\`not single quoted'
\`\`not double quoted''




++++
</td></tr>
++++


++++
</table>
++++

== Macros: links, images & include

++++
<table class=cheatsheet>
++++


If you need to use a space in a URL or path, replace it with %20.


++++
<tr><td class=cheatsheet-source>
++++

....
[[anchor-1]]
Paragraph or block 1.

// This type of anchor doesn't work
anchor:anchor-2[]
Paragraph or block 2.

<<anchor-1>>,
<<anchor-1,First anchor>>,
xref:anchor-2[],
xref:anchor-2[Second anchor].

....

++++
</td><td class=cheatsheet-render>
++++

[[anchor-1]]
Paragraph or block 1.

// This type of anchor doesn't work
anchor:anchor-2[]
Paragraph or block 2.

<<anchor-1>>,
<<anchor-1,First anchor>>,
xref:anchor-2[],
xref:anchor-2[Second anchor].



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
// Link "root" is root of repo.
link:README.adoc[This document]
link:README.adoc[]
link:/[This site root]

....

++++
</td><td class=cheatsheet-render>
++++

// Link "root" is root of repo.
link:README.adoc[This document]
link:README.adoc[]
link:/[This site root]



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
http://google.com
http://google.com[Google Search]
mailto:root@localhost[email admin]

....

++++
</td><td class=cheatsheet-render>
++++

http://google.com
http://google.com[Google Search]
mailto:root@localhost[email admin]



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
First home
image:images/icons/home.png[]
, second home
image:images/icons/home.png[Alt text]
.

.Block image
image::images/icons/home.png[]
image::images/icons/home.png[Alt text]

.Thumbnail linked to full image
image:/images/font/640-screen2.gif[
"My screenshot",width=128,
link="/images/font/640-screen2.gif"]

....

++++
</td><td class=cheatsheet-render>
++++

First home
image:images/icons/home.png[]
, second home
image:images/icons/home.png[Alt text]
.

.Block image
image::images/icons/home.png[]
image::images/icons/home.png[Alt text]

.Thumbnail linked to full image
image:/images/font/640-screen2.gif[
"My screenshot",width=128,
link="/images/font/640-screen2.gif"]



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
// include\:\: is replaced with link:
This is example how files
can be included.

include::footer.txt[]

[source,perl]
----
include::script.pl[]
----

....

++++
</td><td class=cheatsheet-render>
++++

// include\:\: is replaced with link:
This is example how files
can be included.

include::footer.txt[]

[source,perl]
----
include::script.pl[]
----




++++
</td></tr>
++++


++++
</table>
++++

== Lists

++++
<table class=cheatsheet>
++++



++++
<tr><td class=cheatsheet-source>
++++

....
.Bulleted
* bullet
* bullet
  - bullet
  - bullet
* bullet
** bullet
** bullet
*** bullet
*** bullet
**** bullet
**** bullet
***** bullet
***** bullet
**** bullet
*** bullet
** bullet
* bullet

....

++++
</td><td class=cheatsheet-render>
++++

.Bulleted
* bullet
* bullet
  - bullet
  - bullet
* bullet
** bullet
** bullet
*** bullet
*** bullet
**** bullet
**** bullet
***** bullet
***** bullet
**** bullet
*** bullet
** bullet
* bullet



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
.Bulleted 2
- bullet
  * bullet

....

++++
</td><td class=cheatsheet-render>
++++

.Bulleted 2
- bullet
  * bullet



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
// Markers differ from Asciidoc.
.Ordered
. number
. number
  .. letter
  .. letter
. number
.. loweralpha
.. loweralpha
... lowerroman
... lowerroman
.... upperalpha
.... upperalpha
..... upperroman
..... upperroman
.... upperalpha
... lowerroman
.. loweralpha
. number

....

++++
</td><td class=cheatsheet-render>
++++

// Markers differ from Asciidoc.
.Ordered
. number
. number
  .. letter
  .. letter
. number
.. loweralpha
.. loweralpha
... lowerroman
... lowerroman
.... upperalpha
.... upperalpha
..... upperroman
..... upperroman
.... upperalpha
... lowerroman
.. loweralpha
. number


++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
.Ordered 2
a. letter
b. letter
   .. letter2
   .. letter2
       .  number
       .  number
           1. number2
           2. number2
           3. number2
           4. number2
       .  number
   .. letter2
c. letter

....

++++
</td><td class=cheatsheet-render>
++++

.Ordered 2
a. letter
b. letter
   .. letter2
   .. letter2
       .  number
       .  number
           1. number2
           2. number2
           3. number2
           4. number2
       .  number
   .. letter2
c. letter



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
.Labeled
Term 1::
    Definition 1
Term 2::
    Definition 2
    Term 2.1;;
        Definition 2.1
    Term 2.2;;
        Definition 2.2
Term 3::
    Definition 3
Term 4:: Definition 4
Term 4.1::: Definition 4.1
Term 4.2::: Definition 4.2
Term 4.2.1:::: Definition 4.2.1
Term 4.2.2:::: Definition 4.2.2
Term 4.3::: Definition 4.3
Term 5:: Definition 5

....

++++
</td><td class=cheatsheet-render>
++++

.Labeled
Term 1::
    Definition 1
Term 2::
    Definition 2
    Term 2.1;;
        Definition 2.1
    Term 2.2;;
        Definition 2.2
Term 3::
    Definition 3
Term 4:: Definition 4
Term 4.1::: Definition 4.1
Term 4.2::: Definition 4.2
Term 4.2.1:::: Definition 4.2.1
Term 4.2.2:::: Definition 4.2.2
Term 4.3::: Definition 4.3
Term 5:: Definition 5



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
.Labeled 2
Term 1;;
    Definition 1
    Term 1.1::
        Definition 1.1

....

++++
</td><td class=cheatsheet-render>
++++

.Labeled 2
Term 1;;
    Definition 1
    Term 1.1::
        Definition 1.1



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
// Horizontal lists look wrong.
[horizontal]
.Labeled horizontal
Term 1:: Definition 1
Term 2:: Definition 2
[horizontal]
    Term 2.1;;
        Definition 2.1
    Term 2.2;;
        Definition 2.2
Term 3::
    Definition 3
Term 4:: Definition 4
[horizontal]
Term 4.1::: Definition 4.1
Term 4.2::: Definition 4.2
[horizontal]
Term 4.2.1:::: Definition 4.2.1
Term 4.2.2:::: Definition 4.2.2
Term 4.3::: Definition 4.3
Term 5:: Definition 5

....

++++
</td><td class=cheatsheet-render>
++++

// Horizontal lists look wrong.
[horizontal]
.Labeled horizontal
Term 1:: Definition 1
Term 2:: Definition 2
[horizontal]
    Term 2.1;;
        Definition 2.1
    Term 2.2;;
        Definition 2.2
Term 3::
    Definition 3
Term 4:: Definition 4
[horizontal]
Term 4.1::: Definition 4.1
Term 4.2::: Definition 4.2
[horizontal]
Term 4.2.1:::: Definition 4.2.1
Term 4.2.2:::: Definition 4.2.2
Term 4.3::: Definition 4.3
Term 5:: Definition 5



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
[qanda]
.Q&A
Question 1::
    Answer 1
Question 2:: Answer 2

....

++++
</td><td class=cheatsheet-render>
++++

[qanda]
.Q&A
Question 1::
    Answer 1
Question 2:: Answer 2



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
// Bug: (B) should be same level as (A)
.Indent is optional
- bullet
    * another bullet
        1. number
        .  again number (A)
            a. letter
            .. again letter

.. letter
. number (B)

* bullet
- bullet
....

++++
</td><td class=cheatsheet-render>
++++

// Bug: (B) should be same level as (A)
.Indent is optional
- bullet
    * another bullet
        1. number
        .  again number (A)
            a. letter
            .. again letter

.. letter
. number (B)

* bullet
- bullet

++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
.Break two lists
. number
. number

An independent paragraph breaks the list.

. number

.A header breaks the list too
. number

--
. A list block defines a list boundary too
. number
. number
--

--
. number
. number
--

....

++++
</td><td class=cheatsheet-render>
++++

.Break two lists
. number
. number

An independent paragraph breaks the list.

. number

.A header breaks the list too
. number

--
. A list block defines a list boundary too
. number
. number
--

--
. number
. number
--



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
.Continuation
- bullet
continuation
. number
  continuation
* bullet

  literal continuation

.. letter
+
Non-literal continuation.
+
----
any block can be

included in list
----
+
Last continuation.

....

++++
</td><td class=cheatsheet-render>
++++

.Continuation
- bullet
continuation
. number
  continuation
* bullet

  literal continuation

.. letter
+
Non-literal continuation.
+
----
any block can be

included in list
----
+
Last continuation.



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
.List block allows sublist inclusion
- bullet
  * bullet
+
--
    - bullet
      * bullet
--
  * bullet
- bullet
  . number
    .. letter
+
--
      . number
        .. letter
--
    .. letter
  . number


....

++++
</td><td class=cheatsheet-render>
++++

.List block allows sublist inclusion
- bullet
  * bullet
+
--
    - bullet
      * bullet
--
  * bullet
- bullet
  . number
    .. letter
+
--
      . number
        .. letter
--
    .. letter
  . number





++++
</td></tr>
++++


++++
</table>
++++

== Tables

++++
<table class=cheatsheet>
++++


You can fill a table from a CSV file by using the +include::+ macro inside the table.


++++
<tr><td class=cheatsheet-source>
++++

....
// Table footer isn't highlighted.
.An example table
[options="header,footer"]
|=======================
|Col 1|Col 2      |Col 3
|1    |Item 1     |a
|2    |Item 2     |b
|3    |Item 3     |c
|6    |Three items|d
|=======================

....

++++
</td><td class=cheatsheet-render>
++++

// Table footer isn't highlighted.
.An example table
[options="header,footer"]
|=======================
|Col 1|Col 2      |Col 3
|1    |Item 1     |a
|2    |Item 2     |b
|3    |Item 3     |c
|6    |Three items|d
|=======================



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
// Table width, frame, and grid control don't work.
.CSV data, 15% each column
[format="csv",width="60%",cols="4"]
[frame="topbot",grid="none"]
|======
1,2,3,4
a,b,c,d
A,B,C,D
|======


....

++++
</td><td class=cheatsheet-render>
++++

// Table width, frame, and grid control don't work.
.CSV data, 15% each column
[format="csv",width="60%",cols="4"]
[frame="topbot",grid="none"]
|======
1,2,3,4
a,b,c,d
A,B,C,D
|======




++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
// Table column alignment doesn't work.
[grid="rows",format="csv"]
[options="header",cols="^,<,<s,<,>m"]
|===========================
ID,FName,LName,Address,Phone
1,Vasya,Pupkin,London,+123
2,X,Y,"A,B",45678
|===========================

....

++++
</td><td class=cheatsheet-render>
++++

// Table column alignment doesn't work.
[grid="rows",format="csv"]
[options="header",cols="^,<,<s,<,>m"]
|===========================
ID,FName,LName,Address,Phone
1,Vasya,Pupkin,London,+123
2,X,Y,"A,B",45678
|===========================



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
.Multiline cells, row/col span
|====
|Date |Duration |Avg HR |Notes

|22-Aug-08 .2+^.^|10:24 | 157 |
Worked out MSHR (max sustainable
heart rate) by going hard
for this interval.

|22-Aug-08 | 152 |
Back-to-back with previous interval.

|24-Aug-08 3+^|none

|====

....

++++
</td><td class=cheatsheet-render>
++++

.Multiline cells, row/col span
|====
|Date |Duration |Avg HR |Notes

|22-Aug-08 .2+^.^|10:24 | 157 |
Worked out MSHR (max sustainable
heart rate) by going hard
for this interval.

|22-Aug-08 | 152 |
Back-to-back with previous interval.

|24-Aug-08 3+^|none

|====



++++
</td></tr>
++++


++++
</table>
++++

asciidoc AsciiDOc备忘单

AsciiDOc备忘单

gistfile1.adoc

= Asciidoc cheatsheet for GitHub
:idprefix:
:idseparator: -
:sectanchors:
:sectlinks:
:sectnumlevels: 6
:sectnums:
:toc: macro
:toclevels: 6
:toc-title:


toc::[]

== Attributes

== Easy Horizontal Left-Rİght Table
....
++++
<table class=cheatsheet> 
++++

++++
<tr><td class=cheatsheet-source>
++++
Left
++++
</td><td class=cheatsheet-render>
++++
Right
++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
</table>
++++
....









== Easy TOC
....
= EASYTOC
:idprefix:
:idseparator: -
:sectanchors:
:sectlinks:
:sectnumlevels: 6
:sectnums:
:toc: macro
:toclevels: 6
:toc-title:

toc::[]

== Getting started
=== Getting started 
....

++++
</td><td class=cheatsheet-render>
++++


=== EASYTOC
:idprefix:
:idseparator: -
:sectanchors:
:sectlinks:
:sectnumlevels: 6
:sectnums:
:toc: macro
:toclevels: 6
:toc-title:

toc::[]

== Getting started
=== Getting started 

++++
</td></tr><tr><td></td><td></td></tr>
++++





++++
<tr><td class=cheatsheet-source>
++++

....
Author is "{author}" with email <{email}>,
some attribute's value is {someattribute}.

// Line with unknown attribute must be
// removed, but it's not.
Line with attribute like {nosuchattribute}.

Escaped: \{author} and +++{author}+++
....

++++
</td><td class=cheatsheet-render>
++++

Author is "{author}" with email <{email}>,
some attribute's value is {someattribute}.

// Line with unknown attribute must be
// removed, but it's not.
Line with attribute like {nosuchattribute}.

Escaped: \{author} and +++{author}+++


++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
////
TIP: You can use attributes to setup
     auto-generated table of contents, for ex.:
:toc:
:toclevels: 3

Then put this line where you want to
insert table of contents:
toc::[]

In GitHub Wiki table of contents looks worse
than in file, especially multi-level one,
because it include list bullets.
////
....

++++
</td><td class=cheatsheet-render>
++++

////
TIP: You can use attributes to setup
     auto-generated table of contents, for ex.:
:toc:
:toclevels: 3

Then put this line where you want to
insert table of contents:
toc::[]

In GitHub Wiki table of contents looks worse
than in file, especially multi-level one,
because it include list bullets.
////



++++
</td></tr>
++++


++++
</table>
++++

== Headers

++++
<table class=cheatsheet>
++++


++++
<tr><td class=cheatsheet-source>
++++

....
== Level 1
Text.

=== Level 2
Text.

==== Level 3
Text.

===== Level 4
Text.

....

++++
</td><td class=cheatsheet-render>
++++

== Level 1
Text.

=== Level 2
Text.

==== Level 3
Text.

===== Level 4
Text.


++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
Level 1
-------
Text.

Level 2
~~~~~~~
Text.

Level 3
^^^^^^^
Text.

Level 4
+++++++
Text.

....

++++
</td><td class=cheatsheet-render>
++++

Level 1
-------
Text.

Level 2
~~~~~~~
Text.

Level 3
^^^^^^^
Text.

Level 4
+++++++
Text.



++++
</td></tr>
++++


++++
</table>
++++

== Paragraphs

++++
<table class=cheatsheet>
++++


++++
<tr><td class=cheatsheet-source>
++++

....
// Paragraph title is not highlighted.
.Optional Title

Usual
paragraph.

Second paragraph.

....

++++
</td><td class=cheatsheet-render>
++++

// Paragraph title is not highlighted.
.Optional Title

Usual
paragraph.

Second paragraph.


++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
.Optional Title

 Literal paragraph.
  Must be indented.

....

++++
</td><td class=cheatsheet-render>
++++

.Optional Title

 Literal paragraph.
  Must be indented.



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
.Optional Title

[source,perl]
die 'connect: '.$dbh->errstr;

Not a code in next paragraph.

....

++++
</td><td class=cheatsheet-render>
++++

.Optional Title

[source,perl]
die 'connect: '.$dbh->errstr;

Not a code in next paragraph.



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
// Type of block (NOTE/TIP/…) should be
// shown as an icon, not as text.
.Optional Title
NOTE: This is an example
      single-paragraph note.

....

++++
</td><td class=cheatsheet-render>
++++

// Type of block (NOTE/TIP/…) should be
// shown as an icon, not as text.
.Optional Title
NOTE: This is an example
      single-paragraph note.



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
.Optional Title
[NOTE]
This is an example
single-paragraph note.

....

++++
</td><td class=cheatsheet-render>
++++

.Optional Title
[NOTE]
This is an example
single-paragraph note.



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
TIP: Some tip text.

....

++++
</td><td class=cheatsheet-render>
++++

TIP: Some tip text.



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
IMPORTANT: Some important text.

....

++++
</td><td class=cheatsheet-render>
++++

IMPORTANT: Some important text.



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
WARNING: Some warning text.

....

++++
</td><td class=cheatsheet-render>
++++

WARNING: Some warning text.



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
CAUTION: Some caution text.

....

++++
</td><td class=cheatsheet-render>
++++

CAUTION: Some caution text.



++++
</td></tr>
++++


++++
</table>
++++

== Blocks

++++
<table class=cheatsheet>
++++


++++
<tr><td class=cheatsheet-source>
++++

....
.Optional Title
----
*Listing* Block

Use: code or file listings
----

....

++++
</td><td class=cheatsheet-render>
++++

.Optional Title
----
*Listing* Block

Use: code or file listings
----



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
.Optional Title
[source,perl]
----
# *Source* block
# Use: highlight code listings
# (require `source-highlight` or `pygmentize`)
use DBI;
my $dbh = DBI->connect('...',$u,$p)
    or die "connect: $dbh->errstr";
----

....

++++
</td><td class=cheatsheet-render>
++++

.Optional Title
[source,perl]
----
# *Source* block
# Use: highlight code listings
# (require `source-highlight` or `pygmentize`)
use DBI;
my $dbh = DBI->connect('...',$u,$p)
    or die "connect: $dbh->errstr";
----



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
// Sidebar block isn't highlighted.
.Optional Title
****
*Sidebar* Block

Use: sidebar notes :)
****

....

++++
</td><td class=cheatsheet-render>
++++

// Sidebar block isn't highlighted.
.Optional Title
****
*Sidebar* Block

Use: sidebar notes :)
****



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
// Example block isn't highlighted.
.Optional Title
==========================
*Example* Block

Use: examples :)
==========================

// Example caption removed, not changed.
[caption="Custom: "]
==========================
Default caption "Example:"
can be changed.
==========================

....

++++
</td><td class=cheatsheet-render>
++++

// Example block isn't highlighted.
.Optional Title
==========================
*Example* Block

Use: examples :)
==========================

// Example caption removed, not changed.
[caption="Custom: "]
==========================
Default caption "Example:"
can be changed.
==========================


++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
.Optional Title
[NOTE]
===============================
*NOTE* Block

Use: multi-paragraph notes.
===============================

....

++++
</td><td class=cheatsheet-render>
++++

.Optional Title
[NOTE]
===============================
*NOTE* Block

Use: multi-paragraph notes.
===============================



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
////
*Comment* block

Use: hide comments
////

....

++++
</td><td class=cheatsheet-render>
++++

////
*Comment* block

Use: hide comments
////



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
++++
*Passthrough* Block
<p>
Use: backend-specific markup like
<table border="1">
<tr><td>1</td><td>2</td></tr>
</table>
++++

....

++++
</td><td class=cheatsheet-render>
++++

++++
*Passthrough* Block
<p>
Use: backend-specific markup like
<table border="1">
<tr><td>1</td><td>2</td></tr>
</table>
++++



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
 .Optional Title
 ....
 *Literal* Block

 Use: workaround when literal
 paragraph (indented) like
   1. First.
   2. Second.
 incorrectly processed as list.
 ....

....

++++
</td><td class=cheatsheet-render>
++++

.Optional Title

....
*Literal* Block

Use: workaround when literal
paragraph (indented) like
  1. First.
  2. Second.
incorrectly processed as list.
....

++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
.Optional Title
[quote, cite author, cite source]
____
*Quote* Block

Use: cite somebody
____

....

++++
</td><td class=cheatsheet-render>
++++

.Optional Title
[quote, cite author, cite source]
____
*Quote* Block

Use: cite somebody
____




++++
</td></tr>
++++


++++
</table>
++++

== Text

++++
<table class=cheatsheet>
++++



++++
<tr><td class=cheatsheet-source>
++++

....
forced +
line break

....

++++
</td><td class=cheatsheet-render>
++++

forced +
line break



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
normal, _italic_, *bold*, +mono+.

``double quoted'', `single quoted'.

normal, ^super^, ~sub~.

....

++++
</td><td class=cheatsheet-render>
++++

normal, _italic_, *bold*, +mono+.

``double quoted'', `single quoted'.

normal, ^super^, ~sub~.



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
Command: `ls -al`

+mono *bold*+

`passthru *bold*`

....

++++
</td><td class=cheatsheet-render>
++++

Command: `ls -al`

+mono *bold*+

`passthru *bold*`



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
Path: '/some/filez.txt', '.b'

....

++++
</td><td class=cheatsheet-render>
++++

Path: '/some/filez.txt', '.b'



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
// Colors and font size don't change.
[red]#red text# [yellow-background]#on yellow#
[big]#large# [red yellow-background big]*all bold*

....

++++
</td><td class=cheatsheet-render>
++++

// Colors and font size don't change.
[red]#red text# [yellow-background]#on yellow#
[big]#large# [red yellow-background big]*all bold*



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
Chars: n__i__**b**++m++[red]##r##

....

++++
</td><td class=cheatsheet-render>
++++

Chars: n__i__**b**++m++[red]##r##



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
// Comment

....

++++
</td><td class=cheatsheet-render>
++++

// Comment



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
(C) (R) (TM) -- ... -> <- => <= &#182;

....

++++
</td><td class=cheatsheet-render>
++++

(C) (R) (TM) -- ... -> <- => <= &#182;



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
''''

....

++++
</td><td class=cheatsheet-render>
++++

''''



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
// Differs from Asciidoc, but it's hard to say who's correct.
Escaped:
\_italic_, +++_italic_+++,
t\__e__st, +++t__e__st+++,
+++<b>bold</b>+++, $$<b>normal</b>$$
\&#182;
\`not single quoted'
\`\`not double quoted''

....

++++
</td><td class=cheatsheet-render>
++++

// Differs from Asciidoc, but it's hard to say who's correct.
Escaped:
\_italic_, +++_italic_+++,
t\__e__st, +++t__e__st+++,
+++<b>bold</b>+++, $$<b>normal</b>$$
\&#182;
\`not single quoted'
\`\`not double quoted''




++++
</td></tr>
++++


++++
</table>
++++

== Macros: links, images & include

++++
<table class=cheatsheet>
++++


If you need to use a space in a URL/path, replace it with %20.


++++
<tr><td class=cheatsheet-source>
++++

....
[[anchor-1]]
Paragraph or block 1.

// This type of anchor doesn't work
anchor:anchor-2[]
Paragraph or block 2.

<<anchor-1>>,
<<anchor-1,First anchor>>,
xref:anchor-2[],
xref:anchor-2[Second anchor].

....

++++
</td><td class=cheatsheet-render>
++++

[[anchor-1]]
Paragraph or block 1.

// This type of anchor doesn't work
anchor:anchor-2[]
Paragraph or block 2.

<<anchor-1>>,
<<anchor-1,First anchor>>,
xref:anchor-2[],
xref:anchor-2[Second anchor].



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
// Link "root" is root of repo.
link:README.adoc[This document]
link:README.adoc[]
link:/[This site root]

....

++++
</td><td class=cheatsheet-render>
++++

// Link "root" is root of repo.
link:README.adoc[This document]
link:README.adoc[]
link:/[This site root]



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
http://google.com
http://google.com[Google Search]
mailto:root@localhost[email admin]

....

++++
</td><td class=cheatsheet-render>
++++

http://google.com
http://google.com[Google Search]
mailto:root@localhost[email admin]



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
First home
image:images/icons/home.png[]
, second home
image:images/icons/home.png[Alt text]
.

.Block image
image::images/icons/home.png[]
image::images/icons/home.png[Alt text]

.Thumbnail linked to full image
image:/images/font/640-screen2.gif[
"My screenshot",width=128,
link="/images/font/640-screen2.gif"]

....

++++
</td><td class=cheatsheet-render>
++++

First home
image:images/icons/home.png[]
, second home
image:images/icons/home.png[Alt text]
.

.Block image
image::images/icons/home.png[]
image::images/icons/home.png[Alt text]

.Thumbnail linked to full image
image:/images/font/640-screen2.gif[
"My screenshot",width=128,
link="/images/font/640-screen2.gif"]



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
// include\:\: is replaced with link:
This is an example of how files
can be included.

include::footer.txt[]

[source,perl]
----
include::script.pl[]
----

....

++++
</td><td class=cheatsheet-render>
++++

// include\:\: is replaced with link:
This is an example of how files
can be included.

include::footer.txt[]

[source,perl]
----
include::script.pl[]
----




++++
</td></tr>
++++


++++
</table>
++++

== Lists

++++
<table class=cheatsheet>
++++



++++
<tr><td class=cheatsheet-source>
++++

....
.Bulleted
* bullet
* bullet
  - bullet
  - bullet
* bullet
** bullet
** bullet
*** bullet
*** bullet
**** bullet
**** bullet
***** bullet
***** bullet
**** bullet
*** bullet
** bullet
* bullet

....

++++
</td><td class=cheatsheet-render>
++++

.Bulleted
* bullet
* bullet
  - bullet
  - bullet
* bullet
** bullet
** bullet
*** bullet
*** bullet
**** bullet
**** bullet
***** bullet
***** bullet
**** bullet
*** bullet
** bullet
* bullet



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
.Bulleted 2
- bullet
  * bullet

....

++++
</td><td class=cheatsheet-render>
++++

.Bulleted 2
- bullet
  * bullet



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
// Markers differ from Asciidoc.
.Ordered
. number
. number
  .. letter
  .. letter
. number
.. loweralpha
.. loweralpha
... lowerroman
... lowerroman
.... upperalpha
.... upperalpha
..... upperroman
..... upperroman
.... upperalpha
... lowerroman
.. loweralpha
. number

....

++++
</td><td class=cheatsheet-render>
++++

// Markers differ from Asciidoc.
.Ordered
. number
. number
  .. letter
  .. letter
. number
.. loweralpha
.. loweralpha
... lowerroman
... lowerroman
.... upperalpha
.... upperalpha
..... upperroman
..... upperroman
.... upperalpha
... lowerroman
.. loweralpha
. number


++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
.Ordered 2
a. letter
b. letter
   .. letter2
   .. letter2
       .  number
       .  number
           1. number2
           2. number2
           3. number2
           4. number2
       .  number
   .. letter2
c. letter

....

++++
</td><td class=cheatsheet-render>
++++

.Ordered 2
a. letter
b. letter
   .. letter2
   .. letter2
       .  number
       .  number
           1. number2
           2. number2
           3. number2
           4. number2
       .  number
   .. letter2
c. letter



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
.Labeled
Term 1::
    Definition 1
Term 2::
    Definition 2
    Term 2.1;;
        Definition 2.1
    Term 2.2;;
        Definition 2.2
Term 3::
    Definition 3
Term 4:: Definition 4
Term 4.1::: Definition 4.1
Term 4.2::: Definition 4.2
Term 4.2.1:::: Definition 4.2.1
Term 4.2.2:::: Definition 4.2.2
Term 4.3::: Definition 4.3
Term 5:: Definition 5

....

++++
</td><td class=cheatsheet-render>
++++

.Labeled
Term 1::
    Definition 1
Term 2::
    Definition 2
    Term 2.1;;
        Definition 2.1
    Term 2.2;;
        Definition 2.2
Term 3::
    Definition 3
Term 4:: Definition 4
Term 4.1::: Definition 4.1
Term 4.2::: Definition 4.2
Term 4.2.1:::: Definition 4.2.1
Term 4.2.2:::: Definition 4.2.2
Term 4.3::: Definition 4.3
Term 5:: Definition 5



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
.Labeled 2
Term 1;;
    Definition 1
    Term 1.1::
        Definition 1.1

....

++++
</td><td class=cheatsheet-render>
++++

.Labeled 2
Term 1;;
    Definition 1
    Term 1.1::
        Definition 1.1



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
// Horizontal lists look wrong.
[horizontal]
.Labeled horizontal
Term 1:: Definition 1
Term 2:: Definition 2
[horizontal]
    Term 2.1;;
        Definition 2.1
    Term 2.2;;
        Definition 2.2
Term 3::
    Definition 3
Term 4:: Definition 4
[horizontal]
Term 4.1::: Definition 4.1
Term 4.2::: Definition 4.2
[horizontal]
Term 4.2.1:::: Definition 4.2.1
Term 4.2.2:::: Definition 4.2.2
Term 4.3::: Definition 4.3
Term 5:: Definition 5

....

++++
</td><td class=cheatsheet-render>
++++

// Horizontal lists look wrong.
[horizontal]
.Labeled horizontal
Term 1:: Definition 1
Term 2:: Definition 2
[horizontal]
    Term 2.1;;
        Definition 2.1
    Term 2.2;;
        Definition 2.2
Term 3::
    Definition 3
Term 4:: Definition 4
[horizontal]
Term 4.1::: Definition 4.1
Term 4.2::: Definition 4.2
[horizontal]
Term 4.2.1:::: Definition 4.2.1
Term 4.2.2:::: Definition 4.2.2
Term 4.3::: Definition 4.3
Term 5:: Definition 5



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
[qanda]
.Q&A
Question 1::
    Answer 1
Question 2:: Answer 2

....

++++
</td><td class=cheatsheet-render>
++++

[qanda]
.Q&A
Question 1::
    Answer 1
Question 2:: Answer 2



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
// Bug: (B) should be same level as (A)
.Indent is optional
- bullet
    * another bullet
        1. number
        .  again number (A)
            a. letter
            .. again letter

.. letter
. number (B)

* bullet
- bullet
....

++++
</td><td class=cheatsheet-render>
++++

// Bug: (B) should be same level as (A)
.Indent is optional
- bullet
    * another bullet
        1. number
        .  again number (A)
            a. letter
            .. again letter

.. letter
. number (B)

* bullet
- bullet

++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
.Break two lists
. number
. number

An independent paragraph breaks the list.

. number

.A header breaks the list too
. number

--
. A list block defines a list boundary too
. number
. number
--

--
. number
. number
--

....

++++
</td><td class=cheatsheet-render>
++++

.Break two lists
. number
. number

An independent paragraph breaks the list.

. number

.A header breaks the list too
. number

--
. A list block defines a list boundary too
. number
. number
--

--
. number
. number
--



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
.Continuation
- bullet
continuation
. number
  continuation
* bullet

  literal continuation

.. letter
+
Non-literal continuation.
+
----
any block can be

included in list
----
+
Last continuation.

....

++++
</td><td class=cheatsheet-render>
++++

.Continuation
- bullet
continuation
. number
  continuation
* bullet

  literal continuation

.. letter
+
Non-literal continuation.
+
----
any block can be

included in list
----
+
Last continuation.



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
.List blocks allow sublist inclusion
- bullet
  * bullet
+
--
    - bullet
      * bullet
--
  * bullet
- bullet
  . number
    .. letter
+
--
      . number
        .. letter
--
    .. letter
  . number


....

++++
</td><td class=cheatsheet-render>
++++

.List blocks allow sublist inclusion
- bullet
  * bullet
+
--
    - bullet
      * bullet
--
  * bullet
- bullet
  . number
    .. letter
+
--
      . number
        .. letter
--
    .. letter
  . number





++++
</td></tr>
++++


++++
</table>
++++

== Tables

++++
<table class=cheatsheet>
++++


You can fill a table from a CSV file by using the +include::+ macro inside the table.


++++
<tr><td class=cheatsheet-source>
++++

....
// Table footer isn't highlighted.
.An example table
[options="header,footer"]
|=======================
|Col 1|Col 2      |Col 3
|1    |Item 1     |a
|2    |Item 2     |b
|3    |Item 3     |c
|6    |Three items|d
|=======================

....

++++
</td><td class=cheatsheet-render>
++++

// Table footer isn't highlighted.
.An example table
[options="header,footer"]
|=======================
|Col 1|Col 2      |Col 3
|1    |Item 1     |a
|2    |Item 2     |b
|3    |Item 3     |c
|6    |Three items|d
|=======================



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
// Table width, frame, and grid control don't work.
.CSV data, 15% each column
[format="csv",width="60%",cols="4"]
[frame="topbot",grid="none"]
|======
1,2,3,4
a,b,c,d
A,B,C,D
|======


....

++++
</td><td class=cheatsheet-render>
++++

// Table width, frame, and grid control don't work.
.CSV data, 15% each column
[format="csv",width="60%",cols="4"]
[frame="topbot",grid="none"]
|======
1,2,3,4
a,b,c,d
A,B,C,D
|======




++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
// Table column alignment doesn't work.
[grid="rows",format="csv"]
[options="header",cols="^,<,<s,<,>m"]
|===========================
ID,FName,LName,Address,Phone
1,Vasya,Pupkin,London,+123
2,X,Y,"A,B",45678
|===========================

....

++++
</td><td class=cheatsheet-render>
++++

// Table column alignment doesn't work.
[grid="rows",format="csv"]
[options="header",cols="^,<,<s,<,>m"]
|===========================
ID,FName,LName,Address,Phone
1,Vasya,Pupkin,London,+123
2,X,Y,"A,B",45678
|===========================



++++
</td></tr><tr><td></td><td></td></tr>
++++

++++
<tr><td class=cheatsheet-source>
++++

....
.Multiline cells, row/col span
|====
|Date |Duration |Avg HR |Notes

|22-Aug-08 .2+^.^|10:24 | 157 |
Worked out MSHR (max sustainable
heart rate) by going hard
for this interval.

|22-Aug-08 | 152 |
Back-to-back with previous interval.

|24-Aug-08 3+^|none

|====

....

++++
</td><td class=cheatsheet-render>
++++

.Multiline cells, row/col span
|====
|Date |Duration |Avg HR |Notes

|22-Aug-08 .2+^.^|10:24 | 157 |
Worked out MSHR (max sustainable
heart rate) by going hard
for this interval.

|22-Aug-08 | 152 |
Back-to-back with previous interval.

|24-Aug-08 3+^|none

|====



++++
</td></tr>
++++


++++
</table>
++++

asciidoc demo.asciidoc

demo.asciidoc
= Document Title (Level 0)

== Level 1 Section Title

=== Level 2 Section Title

==== Level 3 Section Title

===== Level 4 Section Title

====== Level 5 Section Title

== Another Level 1 Section Title


= Image test

image::sunset.jpg[]

image::sunset.jpg[Sunset]

.A mountain sunset
[#img-sunset]
[caption="Figure 1: ",link=https://www.flickr.com/photos/javh/5448336655]
image::sunset.jpg[Sunset,300,200]

image::https://asciidoctor.org/images/octocat.jpg[GitHub mascot]

= Video
video::rPQoq7ThGAU[youtube]


.app.rb
[source,ruby]
----
require 'sinatra'

get '/hi' do
  "Hello World!"
end
----

asciidoc Manage application release versions with git-flow on top of git

use-git-and-git-flow.adoc
# A GIT-based project development process
김지헌, <ihoneymon@gmail.com>
v0.0.1, 08-12-2015

Let's use git. Use git. I'm telling you: use git!!

** SVN slows down as its change history grows.
*** Git commits and other operations are fast -- even with a long accumulated history there is almost no slowdown.

** Committing in SVN is hard.
+
You can't commit a change that is still in progress, because an SVN commit lands straight in the central repository.
+
*** With git, don't worry about that. Commit as often as you like and push once the feature is finished.
*** If you need to, branch off, develop and commit there, then merge and push whenever you want to fold the work back in.

** How do I inspect the change history?
*** Commit logs are easy to read on their own, and a variety of tools can visualize them.

## What is git?
> Git is a link:http://git-scm.com/about/free-and-open-source[**free and open source**] distributed version control system designed to handle everything from small to very large projects with speed and efficiency.

> Git is link:http://git-scm.com/documentation[**easy to learn**] and has a link:http://git-scm.com/about/small-and-fast[tiny footprint with lightning fast performance]. It outclasses SCM tools like Subversion, CVS, Perforce, and ClearCase with features like link:http://git-scm.com/about/branching-and-merging[cheap local branching], convenient link:http://git-scm.com/about/staging-area[staging areas], and multiple link:http://git-scm.com/about/distributed[workflows].

That is how the link:http://git-scm.com/[git: --fast-version-control] page introduces git. Compared with earlier version-control tools it offers far easier and cheaper 'branching', the source of its strength as a change-history manager -- why not give it a try?

## A branching strategy built on `git flow`
image:http://dogfeet.github.io/articles/2011/a-successful-git-branching-model/git-branching-model.png[git branch]

With git, 'branching', 'committing', and 'merging' come naturally.

> How good would it be if the team shared a 'strategy' for all that free-form branching and used it to manage the software being built?

The git-flow strategy is a 'branch management strategy' for managing and releasing source code. It has been around for quite a while, and the link:https://guides.github.com/introduction/flow/[github flow] and link:https://about.gitlab.com/2014/09/29/gitlab-flow/[gitlab flow] strategies appeared later to address its weaknesses. You can adopt any of these as-is, but I recommend doing so only after thorough discussion and agreement within your team and project.

First we'll cover git flow, the baseline strategy, and walk through the development process with it until it's familiar. Variations can come after that.

### Main branches
image:http://dogfeet.github.io/articles/2011/a-successful-git-branching-model/main-branches.png[main-branch]

#### The *release (`master`)* branch
The default branch every git user knows. Released or production-ready code is kept on `origin/master`.

'Merging' into the `master` branch means releasing a new version. A common setup is a git hook that fires on commits to `master`, automatically building the code and deploying it to the production servers.
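
As a minimal sketch -- `post-receive` is a real git hook, but the build and deploy scripts here are hypothetical -- such a hook might look like:

----
#!/usr/bin/env ruby
# post-receive hook on the central repository (sketch).
# git feeds "<oldrev> <newrev> <refname>" lines on stdin.
$stdin.each_line do |line|
  _oldrev, _newrev, refname = line.split
  next unless refname == 'refs/heads/master'
  system('./build.sh') && system('./deploy.sh')   # hypothetical scripts
end
----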

#### The *development (`develop`)* branch
Code being developed for the next release is managed on `origin/develop`. It's the branch the project's developers watch and work on together -- the most active branch, you could say. Once the code on `develop` stabilizes and is ready to ship, it is 'merged' into `master` and tagged with the release version.

### Supporting branches
Releases must be prepared while bugs in already-released products or services get fixed fast (hotfix) -- all at the same time. A variety of branches are needed to juggle this concurrent work.

#### The *feature* branch
[NOTE]
.Feature branches
===============================================================
* Branches from: `develop`
* Merges back into: `develop`
* Naming convention: anything except `master`, `develop`, `release-*`, or `hotfix-*`
===============================================================
image:http://dogfeet.github.io/articles/2011/a-successful-git-branching-model/merge-without-ff.png[git merge --no-ff]

A `feature` branch is where **a feature intended for release gets developed**. When work on a feature starts, you don't yet know when it will ship. If your project follows an agile process, it's the branch for a feature to be developed within the sprint. A `feature` branch lives until the feature is complete, and is then merged into the `develop` branch. If the result turns out disappointing or unnecessary, simply delete it -- no need to feel attached.

** With git-flow, names take the form `feature/{branch-name}`
** If you use issue tracking, prefer `feature/{issue-number}-{feature-name}`
*** e.g. feature/1-init-project, feature/2-build-gradle-script-write

#### The *release* branch
[NOTE]
.Release branches
===============================================================
* Branches from: `develop`
* Merges back into: `develop`, `master`
* Naming convention: `release-*`
===============================================================

A `release` branch is created when the code is actually ready to ship. Since we release through the `master` branch, the `release` branch is merged into `master` first, and a tag is created pointing at the merge commit so this release is easy to find later. The branch is merged into `develop` as well, so the released changes carry forward.

#### The *hotfix* branch
[NOTE]
.Hotfix branches
===============================================================
* Branches from: `master`
* Merges back into: `develop`, `master`
* Naming convention: `hotfix-*`
===============================================================
image:http://dogfeet.github.io/articles/2011/a-successful-git-branching-model/hotfix-branches.png[hotfix branch]

A branch for unplanned releases. It works much like a `release` branch, but is created to fix a problem in a version already in production. A critical bug in production has to be fixed immediately, so when one appears, the `hotfix` branch is created from the `tag` on `master` that marks the affected release.

## Walking through git usage by development phase
We'll use link:https://www.sourcetreeapp.com/[SourceTree], a popular git client for Windows and Mac.

### Prerequisites
. Check that your '**user name**' and '**email**' are set in the git configuration.
. The repository must have a `master` branch.

### Starting git flow
. Select the repository and click the [Git Flow] button
+
.Click the [Git Flow] button
image::./git-flow-001.png[Click git flow button]
+

. Configure the git-flow branch names
+
image::./git-flow-002.png[git flow configuration]
+

. Initialize git flow
+
image::./git-flow-004.png[git flow init]
+

. Initialization complete
+
image::./git-flow-005.png[git flow init complete]
+
Once git-flow initialization completes, a `develop` branch has been created and checked out.
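
SourceTree's buttons drive the standard git-flow command-line extension underneath; if you prefer a terminal, the equivalent (assuming git-flow is installed) is roughly:

----
git flow init   # prompts for branch names; the defaults match the dialog above
----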

### Developing a feature
Suppose we have to develop the feature for issue '74. Build' shown below.
image::./git-flow-feature-001.png[TiDD]

#### Creating a *feature* branch
. Click the git-flow button
. Click [Start New Feature].
+
.Click the [Start New Feature] button
image::./git-flow-feature-002.png[Click new feature branch]
+

. Enter the feature name and click [OK]
+
.feature branch name: 74-build
image::./git-flow-feature-003.png[new feature branch name]
+
.. A feature branch name of the form `[issue number]-[feature name or description]` works well.

. Confirm that the feature branch was created
+
.feature branch complete
image::./git-flow-feature-004.png[feature/74-build branching complete]
+


#### Committing on the *feature* branch
Include the issue number of the feature you're working on in each commit message. In a commit message, `# text` is treated as a comment, but `#<issue-number>` is linked to the issue by most git hosting platforms.

.commit message on feature branch
image::./git-flow-feature-005.png[commit message bind issue number]


> A commit message should explain, on its own, what the commit does -- and during feature development, commit after the tests pass at each step.


#### Finishing the *feature* branch
When feature development is complete,

. Click the [Git Flow] button
. Click [Finish Feature]
+
.Click the [Finish Feature] button
image::./git-flow-feature-006.png[finish feature]
+

. Click [OK] in the 'Finish Feature' dialog
+
.Click [OK]
image::./git-flow-feature-007.png[confirm]
.Feature branch finished
image::./git-flow-feature-008.png[feature branch complete]
+

.. Don't tick 'Rebase on develop branch'.
... Keeping the record that the developer branched off and worked on a feature branch makes it much easier to trace the history later.
+
.Branch shape without rebase
image::./git-flow-feature-009.png[not use rebase]
+
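
The command-line equivalent of this feature workflow, for reference, is roughly:

----
git flow feature start 74-build    # branches feature/74-build off develop
# ...develop and commit on the feature branch...
git flow feature finish 74-build   # merges into develop, removes the branch
----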


### Releasing
Time to release the application.

#### Creating a *release* branch
. Click the [Git Flow] button
. Click [Start New Release]
+
.Click the [Start New Release] button
image::./git-flow-release-001.png[Click new release]
+

. Enter the release version and click [OK]
+
.Enter the release version
image::./git-flow-release-002.png[input release version]
.Release branch creation command output
image::./git-flow-release-003.png[create release branch]
+

. Confirm that the release branch was created
+
.Confirm that the release branch was created
image::./git-flow-release-004.png[release branch complete]
+


#### Folding release-bound features in
You may well develop additional features after the release branch has been created. In that case, finish the feature, merge it into the `develop` branch, and then merge `develop` into the release branch.

#### Finishing the *release* branch
. Click the [Git Flow] button
. Click [Finish Release]
+
.Click the [Finish Release] button
image::./git-flow-release-005.png[Click release finish]
+

. Click [OK] in the 'Finish Release' dialog
+
.Click [OK] in the 'Finish Release' dialog
image::./git-flow-release-006.png[click confirm]
+
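
On the command line, the equivalent is roughly (the version number is only an example):

----
git flow release start 1.0.0    # branches release/1.0.0 off develop
git flow release finish 1.0.0   # merges into master and develop, tags the release
----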


### The *hotfix* branch
When something needs an urgent fix after a release, create a hotfix against the released version.

#### Creating a *hotfix* branch
. Click the [Git Flow] button
. Click [Start New Hotfix]
+
.Click the [Start New Hotfix] button
image::./git-flow-hotfix-001.png[click new hotfix]
+

. Enter the hotfix name and click [OK]
+
.Enter the hotfix name and click [OK]
image::./git-flow-hotfix-002.png[hotfix name]
+


#### Doing the *hotfix* work

#### Finishing the *hotfix* branch
. Click the [Git Flow] button
. Click [Finish Hotfix]
+
.Click the [Finish Hotfix] button
image::./git-flow-hotfix-003.png[hotfix finish]
+

. Click [OK] in the 'Finish Hotfix' dialog
+
.Click [OK] in the 'Finish Hotfix' dialog
image::./git-flow-hotfix-004.png[hotfix complete]
+

. Hotfix branch finished
+
.Hotfix finish command output
image::./git-flow-hotfix-005.png[hotfix log]
.After finishing the hotfix branch
image::./git-flow-hotfix-006.png[hotfix complete]
+
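
The command-line equivalent, again with an example version number:

----
git flow hotfix start 1.0.1    # branches hotfix/1.0.1 off master
git flow hotfix finish 1.0.1   # merges into master and develop, tags the fix
----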


### Ground rules for project members
* Pass the tests for your development and changes, then commit and push.

## A local / dev / test / production server strategy
* Local and dev servers build and deploy from the `develop` branch.
* Test and production servers build and deploy from the `master` branch.

### Branch management strategy
* Developers on the project do their work against the `develop` branch.
* The `master` branch is managed by the architect, project lead, or a senior developer.
* Nothing is merged into `master` without prior agreement.

## Summary
`git flow` is a change-history management strategy that exploits git's powerful branching. It can be varied to suit the situation: many modified forms are possible, including the approaches offered by github and gitlab.

> What matters is that the team and project members understand the git-flow strategy, decide how they will use it, and share that decision.

> Make the most of how easy git makes it to create branches.

## References
* link:http://dogfeet.github.io/articles/2011/a-successful-git-branching-model.html[A successful git branching model]
* github flow
+
When releases aren't clear-cut, git-flow is genuinely hard to apply -- that's its weak point!
+
** link:https://guides.github.com/introduction/flow/[Introduction github flow]
** link:https://dogfeet.github.io/articles/2011/github-flow.html[github flow]
* gitlab flow
** link:https://about.gitlab.com/2014/09/29/gitlab-flow/[gitlab flow]

asciidoc Notes on Storm + Trident tuning

very_rough_notes_on_batch_lifecycle.asciidoc
=== Lifecycle of a Record

==== Components

* **supervisor**
  - JVM process launched on each storm worker machine. Does not execute your code -- supervises it.
  - the number of workers is set by the number of `supervisor.slots.ports` entries

* **worker**
  - jvm process launched by the supervisor
  - intra-worker transport is more efficient, so run one worker per topology per machine
  - if a worker dies, the supervisor will restart it
  
* **Coordinator** generates a new transaction ID
  - figures out which kafka hosts to read from
  - sends a tuple, which prompts the spout to dispatch a new batch
  - each transaction ID corresponds one-to-one with a single trident batch
  - Transaction IDs for a given topo_launch are serially incremented globally.
  - knows about the Zookeeper `/transactional` node, so it can recover the transaction ID after a restart

* **Kafka Spout** -- suppose 6 kafka spouts (3 per worker, 2 workers), reading from 24 partitions
  - each spout would ping 4 partitions assigned to it, pulling in `max_fetch_size` bytes from each: so we would get `12 * max_fetch_size` bytes on each worker, `24 * max_fetch_size` bytes in each batch
  - Each record becomes one kafka message, which becomes exactly one tuple
  - In our case, incoming records are about 1000 bytes, and messages add a few percent of size. (4000 records takes 4_731_999 bytes, which fits in a 5_000_000 max_fetch_size request).
  - Each trident batch is assembled in parallel across all spouts
  - So trident batch size is (sanity-checked in the sketch just after this list):
    - `spout_batch_kb     ~= max_fetch_size * kafka_machines * kpartitions_per_broker / 1024`
    - `spout_batch_tuples ~= spout_batch_kb * 1024 / bytes_per_record`
    - `record_bytes       ~= 1000 bytes`

* **Executor**
  - Each executor is responsible for one bolt or spout
  - so with 3 kafka spouts on a worker, there are three executors spouting
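
To make the batch-size formulas concrete, here's a quick Ruby sanity check using the illustrative numbers from the Variables section below (integer division, so expect a little round-off):

    max_fetch_size         = 100_000   # bytes pulled per kafka-partition
    kafka_machines         = 4
    kpartitions_per_broker = 4
    record_bytes           = 1_000

    spout_batch_kb     = max_fetch_size * kafka_machines * kpartitions_per_broker / 1024
    spout_batch_tuples = spout_batch_kb * 1024 / record_bytes

    puts spout_batch_kb       # => 1562 (~1600 KB per trident batch)
    puts spout_batch_tuples   # => 1599 (~1600 tuples per trident batch)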



==== Storm Transport

Each executor (bolt or spout) has two disruptor queues: its 'send queue' (the individual tuples it emits) and its 'receive queue' (batches of tuples staged for processing)footnote:[It might seem odd that the spout has a receive queue, but much of storm's internal bookkeeping is done using tuples -- there's actually a regular amount of traffic sent to each spout].

===== Disruptor Queue

At the heart of each queue is an LMAX disruptor: a lock-free ring buffer that lets a single consumer claim and drain messages in batches rather than one at a time.

===== Spout Tuple Handling

* If the spout executor's async-loop decides conditions are right, it calls the spout's `nextTuple()` method.
* The spout can then emit zero, one or many tuples, which the emitter publishes non-blocking into the spout's executor send queue (see below for details).
* Each executor send queue (spout or bolt) has an attached router (`transfer-fn`). In an infinite loop, it
  - lays claim to all messages currently in the queue (everything between its last-read position and the write head), and loads them into a local tuple-batch.
  - sorts tuples into two piles: local ones, destined for tasks on this worker; and remote ones, destined for tasks on other workers.
  - all the remote tuples are published (blocking) as a single batch into the worker's transfer queue; they'll be later sent over the network each to the appropriate worker
  - the router regroups the tuples by task, and publishes (blocking) each tuple-batch into that task's executor receive buffer.
  Note that the executor send queue holds individual _tuples_, whereas the worker transfer queue and executor receive queues hold _collections of tuples_. An executor send queue size of 1024 slots with an executor receive queue size of 2048 slots means there won't ever be more than `2048 * 1024` tuples waiting for that executor to process. It's also important to recognize that, although the code uses the label `tuple-batch` for these collections of tuples, they have nothing to do with the higher-level concept of a 'Trident batch' you'll meet later.

===== Bolt Tuple Handling



===== Worker Transfer and Receive Handlers


Unlike the transfer and the executor queues, the worker's receive buffer is a ZeroMQ construct, not a disruptor queue.

==== Acking In Storm

* Noah is processed, produces Ham and Shem. Ack clears Noah, implicates Ham and Shem
* Shem is processed, produces Abe. Ack clears Shem, implicates Abe
* Ham is processed, produces none. Ack clears Ham


* Alice does a favor for Bob and Charlie. Alice is now in the clear; Bob and Charlie owe


A naive scheme would be:

* For every record generated, send it to the acker
* Who keeps it in a table
* For every record completed, send it to the acker
* Who removes it from the table
* Maintain tickets in a tree structure so you know what to retry

Instead,

* When the tuple tree is created, send an ack-init: the clan id along with its edge checksum
* When each tuple is successfully completed, send an ack holding two sixty-four bit numbers: the tupletree id, and the XOR of its edge id and all the edge ids it generated. Do this for each of its tupletree ids.
* The acker holds a single O(1) lookup table
    - it's actually a set of lookup tables: current, old and dead. new tuple trees are added to the current bucket; every timeout number of seconds, current becomes old, and old becomes dead -- they are declared failed and their records retried.
* The spout holds the original tuple until it receives notice from the acker. The spout won't fetch more than the max-pending number of tuples: this is to protect the spout against memory pressure, and the downstream system against congestion.
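
Here's a toy Ruby rendering of that XOR ledger, replaying the Noah/Shem/Ham/Abe story from above with made-up edge ids:

    a, b, c, d = 11, 22, 33, 44   # edge ids: spout->noah, noah->shem, noah->ham, shem->abe

    ledger  = a          # ack-init: the spout seeds the checksum with noah's edge
    ledger ^= a ^ b ^ c  # noah acked: clears a, implicates children b and c
    ledger ^= b ^ d      # shem acked: clears b, implicates child d
    ledger ^= c          # ham acked: no children
    ledger ^= d          # abe acked: no children
    puts ledger          # => 0 -- every edge XORed in twice; the tree is complete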



===== Acker Walkthrough

When a tuple is born in the spout,

* creates a `root-id` -- this will identify the tuple tree. Let's say it had the value `3`.
* for all the places the tuple will go, makes an `edge-id` (`executor.clj:465`)
  - set the ack tree as `{ root_id: edge_id }`. Say the tuple was to be sent to three places; it would call `out_tuple(... {3: 100})`, `out_tuple(... {3: 101})`, `out_tuple(... {3: 102})`.
* XORs all the edge_id's together to form a partial checksum: `100 ^ 101 ^ 102`.
* sends an `init_stream` tuple to the acker as `root_id, partial_checksum, spout_id`
* the tuple's `ack val` starts at zero.

When a tuple is sent from a bolt, it claims one or more anchors (the tuples it came from), and one or more destination task ids.

[[acker_lifecycle_simple]]
.Acker Lifecycle: Simple
[cols="1*<.<d,1*<.<d,1*<.<d",options="header"]
|=======
| Event    		 	| Tuples			    	| Acker Tree
| spout emits one tuple to bolt-0 	| noah:   `<~,     { noah: a  }>`   	|
| spout sends an acker-init tuple, seeding the ack tree with `noah: a`
                                       	|                                 	| `{ noah: a }`
| bolt-0 emits two tuples to bolt-1 anchored on `noah`. Those new tuples each create an edge-id for each anchor, which is XORed into the anchor's `ackVal` and used in the new tuple's message-id.
                                        | shem: `<~,       { noah: b  }>` + 
                                          ham:  `<~,       { noah: c  }>` + 
                                          noah: `<b^c,     { noah: a  }>` 	|
| bolt-0 acks `noah` using the XOR of its ackVal and tuple tree: `noah: a^b^c`. Since `a^a^b^c = b^c`, this clears off the key `a`, but implicates the keys `b` and `c` -- the tuple tree remains incomplete.
                                      	|                                    	| `{ noah: b^c }`
| bolt-1 processes `shem`, emits `abe` to bolt-2
                                       	| abe:    `<~,     { noah: d  }>` + 
                                     	  shem:   `<d,     { noah: b  }>`  	|
| bolt-1 acks `shem` with `noah: d^b`  	|                                      	| `{ noah: c^d }`
| bolt-1 processes `ham`, emits nothing	| ham:    `<~,     { noah: c  }>`	|
| bolt-1 acks `ham` with `noah: c`   	|                                   	| `{ noah: d }`
| bolt-1 processes `abe`, emits nothing	| abe:    `<~,     { noah: d  }>`	|
| bolt-1 acks `abe` with `noah: d`	|                                  	| `{ noah: 0 }`
| acker removes noah from ledger, notifies spout
                                        |                                    	| `{}`
|	|	|
| `______________________`            	| `______________________________`	| `___________________`
|=======

===== Acker

* Acker is just a regular bolt -- all the interesting action takes place in its execute method.
* it knows
  - id == `tuple[0]` (TODO what is this)
  - the tuple's stream-id
  - there is a time-expiring data structure, the `RotatingHashMap`
    - it's actually a small number of hash maps;
    - when you go to update or add to it, it performs the operation on the right component HashMap.
    - periodically (when you receive a tick tuple), it will pull off oldest component HashMap, mark it as dead; invoke the expire callback for each element in that HashMap.
* get the current checksum from `pending[id]`.

pending has objects like `{ val: "(checksum)", spout_task: "(task_id)" }`

* when it's an ACKER-INIT-STREAM
  `pending[:val] = pending[:val] ^ tuple[1]`


pseudocode

    class Acker < Bolt
      def initialize
        self.ackables = ExpiringHash.new
      end

      def execute(tuple)
        root_id, partial_checksum, from_task_id = tuple.values
        stream_type = tuple.stream_type
        ackables.expire_stalest_bucket if (stream_type == :tick_stream)
        curr = ackables[root_id]

        case stream_type
        when :init_stream
          curr[:val]        = (curr[:val] || 0) ^ partial_checksum
          curr[:spout_task] = from_task_id
        when :ack_stream
          curr[:val]        = (curr[:val] || 0) ^ partial_checksum
        when :fail_stream
          curr[:failed]     = true
        end

        ackables[root_id] = curr

        if    curr[:spout_task] && (curr[:val] == 0)
          # checksum back to zero: the whole tuple tree has completed
          ackables.delete(root_id)
          collector.send_direct(curr[:spout_task], :ack_stream, [root_id])
        elsif curr[:failed]
          ackables.delete(root_id)
          collector.send_direct(curr[:spout_task], :fail_stream, [root_id])
        end

        collector.ack # yeah, we have to ack as well -- we're a bolt
      end
    end






===== A few details

There are a few details to clarify:

First, the spout must never block when emitting -- if it did, critical bookkeeping tuples might get trapped, locking up the flow. So its emitter keeps an "overflow buffer", and publishes as follows:

* if there are tuples in the overflow buffer, add the new tuple to it -- the queue is certainly full.
* otherwise, publish the tuple to the flow with the non-blocking call. That call will either succeed immediately ...
* or fail with an `InsufficientCapacityException`, in which case add the tuple to the overflow buffer

The spout's async-loop won't call `nextTuple` if overflow is present, so the overflow buffer only has to accommodate the maximum number of tuples emitted in a single `nextTuple` call.
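
In Ruby-flavored pseudocode (method and exception names assumed; the real logic lives in `mk-executor-transfer-fn`):

    def publish(tuple)
      if overflow.any?
        overflow << tuple               # queue is certainly full -- don't even try
      else
        begin
          send_queue.try_publish(tuple) # non-blocking publish
        rescue InsufficientCapacityException
          overflow << tuple             # queue filled up mid-flight; stash it
        end
      end
    end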



===== Code Locations

Since the Storm+Trident code is split across multiple parent directories, it can be hard to track where its internal logic lives. Here's a guide to the code paths as of version `0.9.0-wip`.

[[storm_transport_code]]
.Storm Transport Code
|=======
| Role                          | source path                           | function                      | notes
| `async-loop`                  | `clj/b/s/util.clj`                    |                               |
| Spout instantiation           | `clj/b/s/daemon/executor.clj`         | `mk-threads :spout`           |
| Bolt instantiation            | `clj/b/s/daemon/executor.clj`         | `mk-threads :bolt`            |
| Disruptor Queue facade        | `clj/b/s/disruptor.clj` and `jvm/b/s/utils/disruptor.java`    |       |
| Emitter->Send Q logic         | `clj/b/s/daemon/executor.clj`         | `mk-executor-transfer-fn`     |
| Router (drains exec send Q)   | `clj/b/s/daemon/worker.clj`           | `mk-transfer-fn`              | infinite loop attached to each disruptor queue
| Local Send Q -> exec Rcv Q    | `clj/b/s/daemon/worker.clj`           | `mk-transfer-local-fn`        | invoked within the transfer-fn and receive thread
| Worker Rcv Q -> exec Rcv Q    | `clj/b/s/messaging/loader.clj`        | `launch-receive-thread!`      |
| Trans Q -> zmq                | `clj/b/s/daemon/worker.clj`           | `mk-transfer-tuples-handler`  |
| `..`                          | `clj/b/s/daemon/task.clj`             |                               |
| `..`                          | `clj/b/s/daemon/acker.clj`            |                               |
| `..`                          | `clj/b/s/`                            |                               |
|=======


=== More on Transport


* **Queues between Spout and Wu-Stage**: exec.send/transfer/exec.receive buffers
  - output of each spout goes to its executor send buffer
  - router batches records destined for local executors directly to their receive disruptor Queues, and records destined for _all_ remote workers in a single m-batch into this worker's transfer queue buffer.
  - ?? each spout seems to match with a preferred downstream executor
    **question**: does the router move _all_ local records directly from send buf to receive buf, or only those of one special executor?
  - IMPLICATION: If you can, size the send buffer to be bigger than `(messages/trident batch)/spout` (i.e., so that each executor's portion of a batch fits in it).
  - router in this case recognizes all records are local, so just deposits each m-batch directly in wu-bolt's exec.receive buffer.
  - The contents of the various queues live in memory, as is their wont. IMPLICATION: The steady-state size of all the various buffers should fit in an amount of memory you can afford. The default worker heap size is fairly modest -- ??768 MB??.

* **Wu-bolt** -- suppose 6 wu-bolts (3 per worker, 2 workers)
  - Each takes about `8ms/rec` to process a batch.
  - As long as the pipeline isn't starved, this is _always_ the limit of the flow. (In fact, let's say that's what we mean by the pipeline being starved)
  - with no shuffle, each spout's records are processed serially by single wukong doohickey
  - IMPLICATION: max spout pending must be larger than `(num of wu-bolt executors)` for our use case. (There is controversy about how _much_ larger; initially, we're going to leave this as a large multiple).

* **Queues between Wu stage and State+ES stage**
  - each input tuple to wu-stage results in about 5x the number of output tuples
  - If ??each trident batch is serially processed by exactly one wukong ruby process??, each wu executor outputs `5 * adacts_per_batch`
  - IMPLICATION: size exec.send buffer to hold a wu-stage batch's worth of output tuples.

* **Group-by guard**
  - records are routed to ES+state bolts uniquely by group-by key.
  - network transfer, and load on the transfer buffer, are inevitable here
  - IMPLICATION: size transfer buffer comfortably larger than `wukong_parallelism/workers_count`

* **ES+state bolt** -- Transactional state with ES-backed cache map.
  - each state batch gets a uniform fraction of aggregables
  - tuple tree for each initial tuple (kafka message) exhausts here, and the transaction is cleared.
  - the batch's slot in the pending queue is cleared.
  - we want `(time to go thru state-bolt) * (num of wu-bolt executors) < (time to go thru one wu-bolt)`, because we do not want the state-bolt stage to be the choking portion of flow.

* **Batch size**:
  - _larger_: a large batch will condense more in the aggregation step -- there will be proportionally fewer PUTs to elasticsearch per inbound adact
  - _larger_: saving a large batch to ES is more efficient per record (since batch write time increases slowly with batch size)
  - _smaller_: the wu-stage is very slow (8ms/record), and when the flow starts the first wave of batches have to work through a pipeline bubble. This means you must size the processing timeout to be a few times longer than the wu-stage time, and it means the cycle time for discovering that a flow will fail is cumbersome.
  - IMPLICATION: use batch sizes of thousands of records, but keep wukong latency under 10_000 ms.
    - initially, more like 2_000 ms

* **Transactionality**: If any tuple in a batch fails, all tuples in that batch will be retried.
  - with transactional (non-opaque), they are retried for sure in same batch.
  - with opaque transactional, they might be retried in different or shared batches.


==== Variables

	  storm_machines               --       4 ~~ .. How fast you wanna go?
	  kafka_machines               --       4 ~~ .. see `kpartitions_per_broker`
	  kpartitions_per_broker       --       4 ~~ .. such that `kpartitions_per_broker * kafka_machines` is a strict multiple of `spout_parallelism`.
	  zookeeper_machines           --       3 ~~ .. three, for reliability. These should be very lightly loaded
	  workers_per_machine          --       1 ~~ ?? one per topology per machine -- transport between executors is more efficient when it's in-worker
	  workers_count                --       4 ~~ .. `storm_machines * workers_per_machine`

	  spouts_per_worker	       --       4 ~~ .. same as `wukongs_per_worker` to avoid shuffle
	  wukongs_per_worker	       --       4 ~~ .. `cores_per_machine / workers_per_machine` (or use one less than cores per machine)
	  esstates_per_worker          --       1 ~~ .. 1 per worker: large batches distill aggregates more, and large ES batch sizes are more efficient, and this stage is CPU-light.
	  shuffle between spout and wu --   false ~~ .. avoid network transfer

	  spout_parallelism	       --       4 ~~ .. `workers_count * spouts_per_worker`
	  wukong_parallelism	       --      16 ~~ .. `workers_count * wukongs_per_worker`
	  esstate_parallelism          --       4 ~~ .. `workers_count * esstates_per_worker`

	  wu_batch_ms_target           --     800 ~~ .. 800ms processing time seems humane. Choose high enough to produce efficient batches, low enough to avoid timeouts, and low enough to make topology launch humane.
	  wu_tuple_ms                  --       8 ~~ .. measured average time for wu-stage to process an adact
	  adact_record_bytes           --    1000 ~~ .. measured average adact bytesize.
	  aggregable_record_bytes      --     512 ~~ ?? measured average aggregable bytesize.
	  spout_batch_tuples           --    1600 ~~ .? `(wu_batch_ms_target / wu_tuple_ms) * wukong_parallelism`
	  spout_batch_kb               --    1600 ~~ .. `spout_batch_tuples * record_bytes / 1024`
	  fetch_size_bytes             -- 100_000 ~~ .. `spout_batch_kb * 1024 / (kpartitions_per_broker * kafka_machines)`

	  wukong_batch_tuples          --    8000 ~~ ?? about 5 output aggregables per input adact
	  wukong_batch_kb              --      xx ~~ ?? each aggregable is about yy bytes

	  pending_ratio                --       2 ~~ .. ratio of pending batch slots to workers; must be comfortably above 1, but small enough that `adact_batch_kb * max_spout_pending << worker_heap_size`
	  max_spout_pending            --      32 ~~ .. `pending_ratio * wukong_parallelism`

	  worker_heap_size_mb          --     768 ~~ .. enough to not see GC activity in worker JVM. Worker heap holds counting cache map, max_spout_pending batches, and so forth
	  counting_cachemap_slots      --   65535 ~~ .. enough that ES should see very few `exists` GET requests (i.e. very few records are evicted from counting cache)

	  executor_send_slots	       --   16384 ~~ .. (messages)  larger than (output tuples per batch per executor). Must be a power of two.
	  transfer_buffer_mbatches     --      32 ~~ ?? (m-batches) ?? some function of network latency/thruput and byte size of typical executor send buffer. Must be a power of two.
	  executor_receive_mbatches    --   16384 ~~ ?? (m-batches) ??. Must be a power of two.
	  receiver_buffer_mbatches     --       8 ~~ .. magic number, leave at 8. Must be a power of two.

	  trident_batch_ms             --     100 ~~ .. small enough to ensure continuous processing
	  spout_sleep_ms               --      10 ~~ .. small enough to ensure continuous processing; in development, set it large enough that you're not spammed with dummy transactions (eg 2000ms)

	  scheduler                    --    isol ~~ .. Do not run multiple topologies in production without this

==== Refs

* http://www.slideshare.net/lukjanovsv/twitter-storm?from_search=1

tuning_storm_trident.asciidoc
=== Tuning Storm+Trident

Tuning a dataflow system is easy: 

----
The First Rule of Dataflow Tuning:
* Ensure each stage is always ready to accept records, and
* Deliver each processed record promptly to its destination
----

That may seem insultingly simplistic, but my point is that a) if you respect the laws of physics and economics, you can make your dataflow obey the First Rule; b) once your dataflow does obey the First Rule, stop tuning it.

Outline:

* Topology; Little's Law
  - skew
* System: machines; workers/machine, machine sizing; (zookeeper, kafka sizing)
* Throttling: batch size; kafka-partitions; max pending; trident batch delay; spout delay; timeout
* Congestion: number of ackers; queue sizing (exec send, exec recv, transfer)
* Memory: Max heap (Xmx), new gen/survivor size; (queue sizes)
* Ulimit, other ntwk sysctls for concurrency and ntwk; Netty vs ZMQ transport; drpc.worker.threads;
* Other important settings: preferIPv4; `transactional.zookeeper.root` (parent name for transactional state ledger in Zookeeper); `` (java options passed to _your_ worker function), `topology.worker.shared.thread.pool.size`
* Don't touch: `zmq.hwm` (unless you are seeing unreliable network trnsport under bursty load), disruptor wait strategy, worker receive buffer size,  `zmq.threads`

==== Goal

First, identify your principal goal: latency, throughput, memory or cost. We'll just discuss latency and throughput as goals -- tuning for cost means balancing the throughput (records/hour per machine) and cost of infrastructure (amortized $/hour per machine), so once you've chosen your hardware, tuning for cost is equivalent to tuning for throughput. I'm also going to concentrate on typical latency/throughput, and not on variance or 99th percentile figures or somesuch.

Next, identify your dataflow's principal bottleneck, the constraining resource that most tightly bounds the performance of its slowest stage. A dataflow can't pass through more records per second than the cumulative output of its most constricted stage, and it can't deliver records in less end-to-end time than the stage with the longest delay.

The principal bottleneck may be:

* _IO volume_:  there's a hardware bottleneck to the number of bytes per second that a machine's disks or network connection can sustain. Event log processing often involves large amounts of data requiring only parsing or other trivial transformations before storage -- the throughput of such dataflows is IO-bound.
* _CPU_: a CPU-bound flow spends more time in calculations to process a record
* _concurrency_: network requests to an external resource often require almost no CPU and minimal volume. If your principal goal is throughput, the flow is only bound by how many network requests you can make in parallel.
* _remote rate bottleneck_: alternatively, you may be calling an external resource that imposes a maximum throughput beyond your control. A legacy datastore might only be able to serve a certain volume of requests before its performance degrades, or a third-party web API (Google's Geolocation API, say) may impose terms-of-service rate limits.
* _memory_: large windowed joins or memory-intensive analytics algorithms may require so much RAM it defines the machine characteristics

==== Initial tuning

If you're memory-bound, use machines with lots of RAM. Otherwise, start tuning on a machine with lots of cores and over-provision the RAM; we'll optimize the hardware later.

For a CPU-bound flow:

* Construct a topology with parallelism one
* set max-pending to one, use one acker per worker, and ensure that storm's `nofiles` ulimit is large (65000 is a decent number).
* Set the trident-batch-delay to be comfortably larger than the end-to-end latency -- there should be a short additional delay after each batch completes. 
* Time the flow through each stage.
* Increase the parallelism of CPU-bound stages to nearly saturate the CPU, and at the same time adjust the batch size so that state operations (aggregates, bulk database reads/writes, kafka spout fetches) don't slow down the total batch processing time.
* Keep an eye on the GC activity. You should see no old-gen or STW GCs, and efficient new-gen gcs (your production goal: no more than one new-gen gc every 10 seconds, and no more than 10ms pause time per new-gen gc; but for right now just overprovision -- set the new-gen size to give infrequent collections and don't worry about pause times).

Once you have roughly dialed in the batch size and parallelism, check in with the First Rule. The stages upstream of your principal bottleneck should always have records ready to process. The stages downstream should always have capacity to accept and promptly deliver processed records.

==== Provisioning

Use one worker per topology per machine: storm passes tuples directly from sending executor to receiving executor if they're within the same worker. Also set number of ackers equal to number of workers -- the default of one per topology never makes sense (future versions of Storm will fix this).

Match your spout parallelism to its downstream flow. Use the same number of kafka partitions as kafka spouts (or a small multiple). If there are more spouts than kafka machines*kpartitions, the extra spouts will sit idle.

For CPU-bound stages, set one executor per core for the bounding stage (or one less than cores at large core count). Don't adjust the parallelism without reason -- even a shuffle implies network transfer. Shuffles don't impart any load-balancing.

For map states or persistentAggregates -- things where results are accumulated into memory structures -- allocate one stage per worker. Cache efficiency and batch request overhead typically improve with large record set sizes.

===== Concurrency Bound

In a concurrency-bound problem, use very high parallelism.
If possible, use a QueryFunction to combine multiple queries into a batch request.

===== Sidebar: Little's Law

* `Throughput (recs/s) = Capacity / Latency`
* you can't have better throughput than the collective rate of your slowest stage;
* you can't have better latency than the sum of the individual latencies.
    
If all records must pass through a stage that handles 10 records per second, then the flow cannot possibly proceed faster than 10 records per second, and it cannot have latency smaller than 100ms (1/10)

* with 20 parallel stages, the 95th percentile latency of your slowest stage becomes the median latency of the full set. (TODO: nail down numbers)
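
A quick Ruby rendering of that arithmetic:

----
# Little's law: throughput = records in flight / latency.
latency_s = 0.100          # slowest stage: 100 ms per record
capacity  = 1              # records that stage holds in flight
puts capacity / latency_s  # => 10.0 recs/s ceiling for the whole flow

# Parallelism raises capacity, not per-record latency:
puts 20 / latency_s        # => 200.0 recs/s with 20 parallel copies
----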


==== Batch Size

Set the batch size to optimize the throughput of your most expensive batch operation -- a bulk database operation, network request, or intensive aggregation. (There might instead be a natural batch size: for example the twitter `users/lookup` API call returns information on up to 100 distinct user IDs.)

===== Kafka Spout: Max-fetch-bytes

The batch count for the Kafka spout is controlled indirectly by the max fetch bytes. The resulting total batch size is at most `(kafka partitions) * (max fetch bytes)`.

For example, given a topology with six kafka spouts and four brokers with three kafka-partitions per broker, you have twelve kafka-partitions total, two per spout. When the MBCoordinator calls for a new batch, each spout produces two sub-batches (one for each kafka-partition), each into its own trident-partition. Now also say you have records of 1000 +/- 100 bytes, and that you set max-fetch-bytes to 100_000. The spout fetches the largest discrete number of records that sit within max-fetch-bytes -- so in this case, each sub-batch will have between 90 and 111 records. That means the full batch will have between 1080 and 1332 records, and 1_186_920 to 1_200_000 bytes.
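
The same arithmetic in Ruby -- integer division mirrors "the largest discrete number of records":

----
partitions       = 4 * 3       # brokers * kafka-partitions per broker
max_fetch_bytes  = 100_000
rec_min, rec_max = 900, 1_100  # records are 1000 +/- 100 bytes

subbatch_min = max_fetch_bytes / rec_max  # => 90 records per kafka-partition
subbatch_max = max_fetch_bytes / rec_min  # => 111 records per kafka-partition
puts partitions * subbatch_min            # => 1080 records in the full batch
puts partitions * subbatch_max            # => 1332 records in the full batch
----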

===== Choosing a value

* `each()` functions should not care about batch size.
* `partitionAggregate`, `partitionPersist`, `partitionQuery` do.

Typically, you'll find that there are three regimes:

1. when it's too small, response time is flat -- it's dominated by bookkeeping.
2. it then grows slowly with batch size. For example, a bulk put to elasticsearch will take about 200ms for 100 records, about 250ms for 1000 records, and about 300ms for 2000 records (TODO: nail down these numbers).
3. at some point, you start overwhelming some resource on the other side, and execution time increases sharply.

Since the execution time increases slowly in case (2), you get better and better records-per-second throughput. Choose a value that is near the top range of (2) but comfortably less than regime (3).

===== Executor send buffer size

Don't worry about this setting until most other things stabilize -- it's mostly important for ensuring that a burst of records doesn't clog the send queue.

Set the executor send buffer to be larger than the batch record count of the spout or first couple stages. Since it applies universally, don't go crazy with this value. It has to be a power of two (1024, 2048, 4096, 8192, 16384).

==== Garbage Collection and other JVM options

Our worker JVM options:

	worker.childopts: >-
	    -Xmx2600m -Xms2600m -Xss256k -XX:MaxPermSize=128m -XX:PermSize=96m
	    -XX:NewSize=1000m -XX:MaxNewSize=1000m -XX:MaxTenuringThreshold=1 -XX:SurvivorRatio=6
	    -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled
	    -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly
	    -server -XX:+AggressiveOpts -XX:+UseCompressedOops -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true
	    -Xloggc:logs/gc-worker-%ID%.log -verbose:gc
	    -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=1m
	    -XX:+PrintGCDetails -XX:+PrintHeapAtGC -XX:+PrintGCTimeStamps -XX:+PrintClassHistogram
	    -XX:+PrintTenuringDistribution -XX:-PrintGCApplicationStoppedTime -XX:-PrintGCApplicationConcurrentTime
	    -XX:+PrintCommandLineFlags -XX:+PrintFlagsFinal

This sets:

* New-gen size to 1000 MB (`-XX:MaxNewSize=1000m`). Almost all the objects running through storm are short-lived -- that's what the First Rule of data stream tuning says -- so almost all your activity is here.
* Apportions that new-gen space to give you 800mb for newly-allocated objects and 100mb for objects that survive the first garbage collection pass.
* Initial perm-gen size of 96m (a bit generous, but Clojure uses a bit more perm-gen than normal Java code would), and a hard cap of 128m (this should not change much after startup, so I want it to die hard if it does).
* Implicit old-gen size of 1500 MB (total heap minus new- and perm-gens) The biggest demand on old-gen space comes from long-lived state objects: for example an LRU counting cache or dedupe'r. A good initial estimate for the old-gen size is the larger of a) twice the old-gen occupancy you observe in a steady-state flow, or b) 1.5 times the new-gen size. The settings above are governed by case (b).
* Total heap of 2600 MB (`-Xmx2600m`): a 1000 MB new-gen, a roughly 100 MB perm-gen, and the implicit 1500 MB old-gen. Don't use gratuitously more heap than you need -- long gc times can cause timeouts and jitter. Heap size larger than 12GB is trouble on AWS, and heap size larger than 32GB is trouble everywhere.
* Tells it to use the "concurrent-mark-and-sweep" collector for long-lived objects, and to only do so when the old-gen becomes crowded.
* Enables a few mysterious performance options
* Logs GC activity at max verbosity, with log rotation

If you watch your GC logs, in steady-state you should see

* No stop-the-world (STW) gc's -- nothing in the logs about aborting parts of CMS
* old-gen GCs should not last longer than 1 second or happen more often than every 10 minutes
* new-gen GCs should not last longer than 50 ms or happen more often than every 10 seconds
* new-gen GCs should not fill the survivor space
* perm-gen occupancy is constant

Side note: regardless of whether you're tuning your overall flow for latency or throughput, you want to tune the GC for latency (low pause times). Since things like committing a batch can't proceed until the last element is received, local jitter induces global drag.

==== Tempo and Throttling

Max-pending (`TOPOLOGY_MAX_SPOUT_PENDING`) sets the number of tuple trees live in the system at any one time.

Trident-batch-delay (`topology.trident.batch.emit.interval.millis`) sets the maximum pace at which the trident Master Batch Coordinator will issue new seed tuples. It's a cap, not an add-on: if t-b-d is 500ms and the most recent batch was released 486ms ago, the spout coordinator will wait 14ms before dispensing a new seed tuple. If the next pending entry isn't cleared until 523ms have passed, it will be dispensed immediately. If it takes 1400ms, it will also be released immediately -- but no make-up tuples are issued.
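
A sketch of that pacing rule in Ruby:

----
# trident-batch-delay is a cap, not an add-on: wait only for whatever
# remains of the interval, and never issue make-up batches.
def delay_before_next_batch(tbd_ms, ms_since_last_batch)
  [tbd_ms - ms_since_last_batch, 0].max
end

delay_before_next_batch(500, 486)    # => 14 -- wait 14ms
delay_before_next_batch(500, 523)    # => 0  -- dispense immediately
delay_before_next_batch(500, 1400)   # => 0  -- immediately, but no make-ups
----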

Trident-batch-delay is principally useful to prevent congestion, especially around startup. As opposed to a traditional Storm spout, a Trident spout will likely dispatch hundreds of records with each batch. If max-pending is 20, and the spout releases 500 records per batch, the spout will try to cram 10,000 records into its send queue.


==== Machine Sizing

