
Asterisk – Seamless dialing of remote extension through DTMF

Problem Description:

There are two offices. Office A runs Asterisk / FreePBX, while office B runs a closed system with an auto attendant.

The guys at office A would like to be able to dial office B extensions as if they were local.

Solution Overview:

Program the office A extensions in this way:

  1. Local extension picks up
  2. Remote office number is called
  3. When the remote office picks up
  4. DTMF key presses are sent to select the right extension
  5. Call is connected

Solution Details:

First I attempted to program this in FreePBX through the GUI, but I wasn’t having any luck: the default macros would not let me craft the Dial command in a way that sends the key presses after the call is answered. Although it would have been nice to have everything in the GUI, that route seems to be a dead end. I ended up relying on the good old /etc/asterisk/extensions_custom.conf configuration file and simply created my own extensions there.

[ext-local]
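; Dialing local 102 calls office B's main number, pauses about a second (ww),
; then sends DTMF 11 to its auto attendant; 103 does the same but sends 12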
exten => 102,1,Dial(SIP/v-outbound/4031112222,30,rD(ww11))
exten => 103,1,Dial(SIP/v-outbound/4032223333,30,rD(ww12))

[ext-local] sets the right context so that these extensions are picked up as if they were local.  You could also put these into other contexts like [ivr-1] etc.

r tells the Dial command to play ringing back to the caller while the remote call is being set up

D tells the Dial command to send the DTMF digits in the parentheses once the remote end picks up

w tells the Dial command to wait 0.5 seconds before sending the next digit, so ww gives the remote auto attendant about a second to start listening

Speed and Latency from Calgary to the Internet

I’ve always been fascinated by the relationship between internet speed and latency. I know that for a given latency and TCP window size you can calculate the maximum speed of a connection:

TCP-Window-Size-in-bits / Latency-in-seconds = Bits-per-second-throughput

It’s described in more detail here
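To make the formula concrete, here is a back-of-the-envelope calculation with made-up numbers (a 64 KB window and 50 ms of latency):

$ echo "scale=2; (64*1024*8) / 0.050 / 1000000" | bc
10.48

In other words, a 64 KB window over a 50 ms round trip caps out at roughly 10.5 Mbit/s, no matter how fast the physical link is.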

But will that hold up in real world tests?  I decided to measure it.  Here are the results:

[Heatmap: measured download speed by country]

[Heatmap: measured latency by country]

To do this I did the following:

  • Created a script that tested each of the ~3000 speedtest.net servers by downloading a large test image from each with wget, and ran it from a 1 Gbps connection on a CentOS 7 server with the default TCP window size (a rough sketch of the loop follows this list)
  • Summarized the data by country
  • Uploaded the results to openheatmap.com to make them pretty
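For what it’s worth, the measurement loop comes down to something like the sketch below. The server list file, the test image path and the use of curl’s built-in speed reporting (instead of parsing wget output) are my own choices here, not a copy of the actual script:

#!/bin/bash
# For each speedtest.net host: average RTT from 5 pings, then the average
# download speed (bytes/sec) of one large test image, TCP window left at default
while read -r host; do
    rtt=$(ping -c 5 -q "$host" | awk -F'/' '/^rtt|^round-trip/ {print $5}')
    bps=$(curl -s -o /dev/null -w '%{speed_download}' \
          --max-time 60 "http://$host/speedtest/random4000x4000.jpg")
    echo "$host,$rtt,$bps"
done < servers.txt > results.csv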

Here are some interesting things that came out of this little experiment:

  • Some countries have noticeably slower throughput than their neighbours in the same geographic region, which suggests they are not running at their full potential. Examples include Guatemala, Brazil, Libya, Latvia, Lithuania, Iceland and Portugal. Take these results with a grain of salt, because some of them suffer from a small sample size. For example, I had to remove Japan from the data set because the only server in Japan was an unusually slow one.
  • The vast majority of countries do run at their theoretical maximum throughput. That means that most of the time, as long as you know your TCP window size, you can calculate the throughput with confidence.
  • You can roughly deduce route paths by looking at the map. Take Iceland, for example: even though it is closer to Canada than Great Britain is, the route clearly traverses Great Britain, Ireland or Norway before heading to Iceland.

Unexpected results:

  • I got an impossible latency for Pakistan (47 milliseconds). With that latency there is no way to even reach Europe from here, much less somewhere as far away as Pakistan; the closest country with that kind of latency is Mexico. I eliminated it from the data set as an anomaly (a quick back-of-the-envelope check follows this list).
  • I got an impossible latency for Morocco as well (64 milliseconds). Again, that is way too quick. I’m guessing both the Morocco and Pakistan servers have been misclassified and are actually somewhere in the USA.
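Here is the back-of-the-envelope check on the Pakistan number, using rough figures of my own (roughly 11,000 km great-circle from Calgary, light in fibre at roughly 200,000 km/s):

$ echo "2 * 11000 * 1000 / 200000" | bc
110

That is about 110 ms of round-trip propagation delay before a single router has added any overhead, so 47 ms cannot be right.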

Remaining questions:

  • If you noticed, I haven’t said what my CentOS 7 default TCP window size was. That’s because I’m not really sure myself. Working back from the real-life results, the window size must be approximately 1,000 KB. However, that doesn’t seem to match these parameters on my system: net.core.wmem_max = 212992 and net.ipv4.tcp_wmem = 4096 16384 3915616 … there must be some multiplication/division factor involved. Or maybe I’m looking at the wrong parameter altogether? (More on that just below.)
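My suspicion, for what it’s worth: these were downloads, so the window that matters is the receive window, which Linux auto-tunes up to the third value of net.ipv4.tcp_rmem rather than anything under tcp_wmem. In other words, I may simply have been staring at the sending side. The receive-side knobs can be listed with:

$ sysctl net.ipv4.tcp_rmem net.core.rmem_max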

Quick and easy iptables based proxy

Today was a busy day dealing with a power outage that affected 2,100 businesses in downtown Calgary. Of course, a couple of my clients were in the zone that went dark. I offered to run their key infrastructure from my place for a couple of days. Everything went great, except that I have only one IP address on my connection. That’s not good when both clients want to come in on port 443. What to do?

Call up my ISP and order another IP? Nope: it takes too long, it’s too expensive, and I only need this temporarily. Also, the ISP might mess it up and take me offline for a while.

Get a VM with a public IPv4 address and proxy the traffic over? Yes, but why go with something heavy-handed like nginx?

I prefer this elegant solution brought to you by iptables:


# echo 1 >| /proc/sys/net/ipv4/ip_forward
# iptables -t nat -A PREROUTING -p tcp -d $IP_OF_VM --dport 443 -j DNAT --to $IP_WHERE_IM_FORWARDING_TO:8443
# iptables -t nat -A POSTROUTING -j MASQUERADE
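The first line enables forwarding, the DNAT rule rewrites anything arriving on the VM’s port 443 so it heads to the real server on port 8443, and the MASQUERADE rule makes the forwarded packets look like they come from the VM so the replies flow back through it. When the outage is over, the temporary rules can be reviewed and flushed with the usual invocations:

# iptables -t nat -L -n -v
# iptables -t nat -F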

[Solved] Linux PPTP client NATed behind pfsense firewall

When migrating my PPTP client configuration from an older Linux server to a new one, I could not get the PPTP tunnel up and running on the new server. All I kept getting was this:


using channel 15
Using interface ppp0
Connect: ppp0 <--> /dev/pts/1
sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0xxxxx6a93> <pcomp> <accomp>]
sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0xxxxx6a93> <pcomp> <accomp>]
sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0xxxxx6a93> <pcomp> <accomp>]
sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0xxxxx6a93> <pcomp> <accomp>]
sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0xxxxx6a93> <pcomp> <accomp>]
sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0xxxxx6a93> <pcomp> <accomp>]
sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0xxxxx6a93> <pcomp> <accomp>]
Script pptp vpn.xxxxxxxx.com --nolaunchpppd finished (pid 23704), status = 0x0
Modem hangup

So I was sending, but getting nothing back.

I triple-checked my configuration and tweaked a few settings. No luck. Then I stumbled on an article about the challenges of running PPTP behind NAT devices. I already knew about the common issue of not being able to have more than one client session dialed out to the same remote PPTP server, and for that reason I was careful not to have more than one open at the same time, but I thought I’d dig a bit deeper to see whether NAT was the culprit.

Long story short, I noticed that pfSense -> Diagnostics -> pfTop was showing a GRE state from the old server to the destination VPN server. It showed an age of 3+ hours (I forget the exact number) even though I was sure the PPTP session on the old server had been shut down. I reset the firewall state in pfSense, and the tunnel started working immediately.

The moral of the story is that pfSense likes to keep the GRE state open for hours after the session has been torn down. That is a problem: packets go out, but the replies are NATed back to the wrong server when they come in.
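If you would rather not click through the GUI, the same cleanup can be done from the pfSense shell with pf’s state-killing option; the addresses here are placeholders for the old server and the remote VPN endpoint:

# pfctl -k 192.168.1.50
# pfctl -k 192.168.1.50 -k 203.0.113.10

The first form kills every state originating from the old server; the second narrows it down to states towards the VPN server.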

Version details:

pfSense: 2.1.4-RELEASE (i386)
PPTP: 1.7.2
Linux: Ubuntu 14.04.1 LTS

IPv6 for the impatient

I’ve always wanted to get my feet wet with IPv6. The problem is that my ISP doesn’t support it. Today I found out that I don’t need to wait until they get their act together: I can get onto IPv6 immediately by using a tunnel from Hurricane Electric.

  • It’s free
  • You get a /48 prefix of publicly routed IPv6 addresses (1,208,925,819,614,629,174,706,176 of them). I still don’t know what I will do with that many 🙂
  • You can dual stack: IPv4 and IPv6 run side by side on a single router, so there is no need to shut down or disrupt IPv4, let alone quit it cold turkey. In fact, I had only two machines on my network dual stacked, happily coexisting with their IPv4-only counterparts.
  • Not that hard to set up if you have a router that plays nice (most do); a bare-bones Linux version of the tunnel is sketched right after this list.
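For reference, on a plain Linux box the tunnel itself comes down to a handful of ip commands. The addresses below are documentation placeholders; the real values come from the tunnel details page on tunnelbroker.net:

# 6in4 tunnel to Hurricane Electric (placeholder endpoints and prefix)
ip tunnel add he-ipv6 mode sit remote 203.0.113.1 local 198.51.100.2 ttl 255
ip link set he-ipv6 up
ip addr add 2001:db8:1::2/64 dev he-ipv6
ip route add ::/0 dev he-ipv6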

I got it up and running in under an hour. It was fun. When I switched to purely IPv6 mode, it reminded me of the ’90s, when every site that actually worked was a cause for celebration.

To be honest, after a day or so I actually ended up turning it off. The trouble was that even though I was running dual stack, everything liked to try IPv6 first and fall back to IPv4 second. That’s good in theory, but in practice I feel IPv6 is just not ready yet if you want a 100% smooth experience.

I’ll try again in a year or so since I can see IPv6 adoption is exploding.

Pingb: Bandwidth Measuring

Ever needed to get an estimate of a link’s bandwidth when all you have is shell access to one of the endpoints?

Normally you would need access to both endpoints and run something like iperf across the link. That’s the proper way, but it takes time to set up (poking holes through firewalls, etc.). If you don’t want to go through that hassle and just need a quick estimate, you can use pingb.
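For the record, the proper way is only two commands once the firewalls cooperate (classic iperf listens on TCP port 5001 by default), but it does mean installing and running something on the far end:

server$ iperf -s
client$ iperf -c server.example.com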

Pingb estimates the bandwidth by comparing the round-trip times of ICMP echo requests of different sizes.
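I don’t have pingb’s internals in front of me, so treat the following only as an illustration of the general idea, spelled out with plain ping and awk: the extra bytes of a large echo request cross the slowest link twice (once in the request, once in the reply), and the extra round-trip time they cause is serialization delay that can be turned into a bandwidth figure.

#!/bin/bash
# Rough bandwidth estimate from the RTT difference between small and large pings.
# Not pingb itself, just the same idea written out.
host=$1
small=$(ping -c 10 -q -s 56   "$host" | awk -F'/' '/^rtt|^round-trip/ {print $5}')
large=$(ping -c 10 -q -s 1400 "$host" | awk -F'/' '/^rtt|^round-trip/ {print $5}')
# (1400-56) extra bytes, 8 bits each, crossing the link in both directions,
# divided by the extra delay converted from milliseconds to seconds
echo "$small $large" | awk '{printf "~%.1f Mbit/s\n", (1400-56)*8*2 / (($2-$1)/1000) / 1000000}'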