Jul 06

URL shortener service: https://url.joor.net (pygmy)

Reading time: 2 – 2 minutes

Lately, I started running my own URL shortener service because Google's URL shortener service is going to shut down. Below is a short video showing how the service runs, and there is also a Google Chrome extension I created to integrate the service with the browser.

For quick access and reference the URLs are:

Final notes:

  • The base URL is not the shortest one, but for my personal requirements it’s more than enough.
  • The service is in its early stages, especially the extension. Expect errors, bugs, and downtime.
  • The service is open and free for everyone, but remember that its main purpose is my personal use.
  • I know that pygmy has more features than I expose, but I don’t need them and I don’t want to maintain those parts of the application.
  • I appreciate Amit’s effort in building such a good application.
Jul 05

A history of connectivity in Torrelavit: from 1,200bps using packet radio to 1Gbps on fiber optics

Reading time: 5 – 8 minutes

This is a chronology of my history using Internet, and non-Internet, connections. I never thought this would be possible; currently, my Internet connection is faster than my local network. The best speed test I’ve got so far is the one in the attached screenshot, using an old Dell Studio XPS with Linux Mint. The paradox is that more modern computers get worse performance than this one, and all of them synchronize their network card at 1Gbps with the Mikrotik Cloud Switch, which acts as a gateway applying NAT rules in front of the fiber optic bridge.

By the way, this change at home made me spend some time remembering all the Internet connections I have had at home since 1992. During the Barcelona Olympic games I was in the Netherlands on a family holiday trip, and there we met a guy who told me how to connect two computers using a ham radio station; this technique is called packet radio. I was 15 and my life changed at that instant. The idea was to use a 145MHz or 433MHz radio station (there were more frequencies, but they were unusual) and connect the speaker and mic of the radio station, through an audio jack, to a TNC (Terminal Node Controller) or a Baycom, which at the end of the day works like a modem, converting analog audio signals into digital serial port signals. A terminal application talking to the serial port was the user interface to the network. The protocol used for the WAN was AX.25, a variant of X.25 (used in the past on most ATM networks). The packet radio network had not only client stations but also BBSes (Bulletin Board Systems), weather stations, email, FTP, nodes (repeaters), and gateways to other WAN networks like Fidonet and the Internet. How to find those resources without a search engine like Google is another story.

Exchanging data was a nightmare, because the bandwidth was 1,200bps on a shared medium, the air, using simplex (half-duplex) communication. In plain words, this means that sending a 10KB picture could take an afternoon, and terminals usually weren’t multi-tasking, so the computer was busy doing that all the time. The first time I ran a browser it was for Gopher, the service that at some point inspired the Web. I had to use two computers: one of them running as a TCP/IP to AX.25 gateway, and the other with Windows 3.1 running Mosaic, with the two computers exchanging data using IP over the parallel port (PLIP). Mosaic was the first graphical browser in history, as far as I know. Finally, I remember that I developed a small implementation of a TCP/IP stack over AX.25 on an EPROM for a TNC; it was very basic but good enough for mapping ham radio IDs to TCP/IP addresses. BTW, my ham radio ID was EB3EWH.

This part of the story lasted about 5 years. I remember the late 90s, when a good enough plain old telephone service (POTS) line was installed at home, and a super modern 9600bps telephone modem, which evolved very fast through different speeds up to 56K, was the only option for connecting to the Internet. We used Infovía and Infovía Plus, a Telefónica service for reaching the ISPs through a data network instead of regular calls; a really painful part of the story. Slow speeds, and really expensive services for what we got. Maybe the funniest thing about this part of the history was when someone at home picked up the phone and the data connection was interrupted. Roughly another 5 years with this connectivity pattern.

In the early 2000s, I installed an ISDN line at home with two 64kbps channels: one was permanently connected to the Internet and the other one was available for voice calls. ISDN was a really interesting and very stable technology. I’m out of that market currently, but I think it’s still possible to buy links using this old technology. This part of the history was shorter than the previous ones, and the funniest story was a thunderstorm and lightning day when the TR1 exploded in front of me; it scared me a lot.

At the end of 2002 a 256Kbps ADSL was installed on top of a new POTS line, installed again at home. It was like a dream; finally I was browsing with broad bandwidth. ADSL sped up to 2Mbps over time, but it was impossible to go beyond that because I’m too far from any ADSL distribution point. I remember that in 2012 I was paying for a 6Mbps Internet connection, the smallest one possible to hire, and my maximum speed was 1.9Mbps, measured with proper line quality test equipment.

At the end of 2013 I installed a WiFi link with a guy who resells fiber optics connectivity in Sant Sadurní d’Anoia, using a link to the Ordal, a mountain that I can see from home: about a 10km link on the 5GHz band. The speed was 6Mbps/300kbps. This link was active until two days ago, but since August 2014 the main Internet connection has been another WiFi link with a company called XTA (Xarxa de Telecomunicacions Alternatives), a.k.a. WifiPenedes, which is part of the Guifi.net project. This is the same company which installed fiber optics at home the day before yesterday. The WiFi link that I had with WifiPenedes was 20Mbps/1Mbps, and currently, with the fiber optics, I have 1Gbps/300Mbps and no backup link with any other technology. BTW, I have a 4G subscription which can be used for emergencies.

Of course it would be nice to go deeper into each of those points, digging up anecdotes from these 26 years of history connecting to wide area networks (WAN); maybe one day I’ll find the time to share the amazing moments and the people I met thanks to those networks. I have to admit that networking changed my life, and I have had access to broad knowledge thanks to it. Thanks to everyone and every company that made this possible; it has been a pleasure to enjoy this fantastic process. I finish by accepting the challenge of improving my LAN to get the best from my new Internet connection.

May 02

HTTPie – command line HTTP client

Reading time: 1 – 2 minutes

I imagine you are used to using curl for command line scripts, tests, and many other things. I did the same, but some weeks ago I discovered HTTPie, which is the best substitute for curl that I’ve ever found. Of course, it’s available for a lot of Linux distributions, Windows, and Mac. But I use it with docker, which is much more transparent for the operating system and easier to update. To be more precise, I use the following alias trick for this tool:

alias http='sudo docker run -it --rm --net=host clue/httpie'
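
With the alias in place, requests look like the following; this is just a quick sketch using httpbin.org as an example endpoint:

# GET with a query parameter (==) and a custom header (:)
http GET httpbin.org/get name==john X-API-Token:123
# POST a JSON body built from key=value pairs (= for strings, := for raw JSON)
http POST httpbin.org/post name=john age:=29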

Official website: httpie.org

Let me paste some highlights about HTTPie:

  • Sensible defaults
  • Expressive and intuitive command syntax
  • Colorized and formatted terminal output
  • Built-in JSON support
  • Persistent sessions
  • Forms and file uploads
  • HTTPS, proxies, and authentication support
  • Support for arbitrary request data and headers
  • Wget-like downloads
  • Extensions
  • Linux, macOS, and Windows support

On the tool webpage there is a nice comparison of how HTTPie looks versus curl.
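
For a quick flavour of the difference, here is the same hypothetical POST request in both tools (httpbin.org used as an example endpoint):

curl -X POST -H 'Content-Type: application/json' -d '{"hello": "world"}' https://httpbin.org/post
http POST httpbin.org/post hello=world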

Aug 29

Internet fail over connection with Mikrotik

Reading time: 2 – 4 minutes

Based on my home configuration, I’m going to describe how to set up a Mikrotik to manage a fail over Internet connection. The next schema describes a Mikrotik gateway with two Internet connections (GUIFI and SS). Assuming GUIFI as the default Internet connection, periodic checks against the Google DNSes (8.8.8.8 and 8.8.4.4) will tell when it’s time to change the default route.
[schema: internet-failover]

If you have some Linux routing background it will be easier to understand the configuration. The main idea is to use policy routing tables and to mark packets so they use one table or the other. In my case I have two routing tables, GUIFI and SS, and of course the default gateway of each of those tables is the gateway indicated in the schema.

The first step is to take care of the routes to the hosts to monitor; through the GUIFI connection we will check connectivity to 8.8.8.8, and through SS the monitored host will be 8.8.4.4.

/ip route
add dst-address=8.8.8.8 gateway=172.29.2.1 scope=10
add dst-address=8.8.4.4 gateway=172.29.1.1 scope=10

The second step is to configure the two routing tables; these routes will check Internet host availability. Routes are resolved recursively (more info), and will be active only while a host is pingable.

# routing table for GUIFI
/ip route
add distance=1 gateway=8.8.8.8 routing-mark=GUIFI check-gateway=ping
add distance=2 gateway=8.8.4.4 routing-mark=GUIFI check-gateway=ping
# routing table for SS
/ip route
add distance=1 gateway=8.8.4.4 routing-mark=SS check-gateway=ping
add distance=2 gateway=8.8.8.8 routing-mark=SS check-gateway=ping

The resulting routing table looks like this:

[screenshot: routing-table]

The next step is to create marking rules in the firewall:

# next rule mark all LAN traffic (10.2.0.0/26) before routing
# it'll be processed by routing table GUIFI
# it makes GUIFI the default Internet connection 
/ip firewall mangle
add action=mark-routing chain=prerouting comment="All LAN traffic" dst-address=\
    !10.0.0.0/8 new-routing-mark=GUIFI passthrough=no src-address=10.2.0.0/26

If any specific host or service needs to use a specific routing table, you can create new rules with the proper mark to redirect that traffic to one Internet connection; if that path fails, the other Internet connection will be used. A sketch of such a rule is shown below.
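
For example, something like this could force one host through the SS connection (a sketch only: 10.2.0.10 is a hypothetical host, and the rule has to be placed before the general LAN rule above so it matches first):

/ip firewall mangle
add action=mark-routing chain=prerouting comment="Force one host via SS" dst-address=\
    !10.0.0.0/8 new-routing-mark=SS passthrough=no src-address=10.2.0.10 place-before=0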

In my case I have a more complicated scenario: an internal VoIP server uses an IP telephony service that is only available through the GUIFI connection. The way to force that is to forbid this traffic on the SS connection. A simple firewall rule helps to do that:

# X.X.X.X = IP address of the IP telephony provider
/ip firewall filter
add action=reject chain=forward dst-address=X.X.X.X in-interface=\
    bridge-lan out-interface=SS-eth2

I hope these simple notes are useful for you; they are inspired by Advanced Routing Failover without Scripting.

Aug 12

Secure download URLs with expiration time

Reading time: 4 – 6 minutes

Requirements

Imagine an HTTP server with these restrictions:

  • only specific files can be downloaded
  • with a limited time (expiration date)
  • an ID allows tracing who downloads each file
  • with minimal maintenance and dependencies (no databases, or things like that)

The base of the solution that I designed is the URL format:

http://URL_HOST/<signature>/<customer_id>/<expire_date>/<path_n_file>
  • signature: calculated with the following formula, given a “seed” (see the sketch after this list)
    • seed = “This is just a random text.”
    • str = customer_id + expire_date + path_n_file
    • signature = encode_base64( hmac_sha1( seed, str))
  • customer_id: just an arbitrary identifier to distinguish who uses the URL
  • expire_date: when the generated URL stops working
  • path_n_file: the relative path in your private repository and the file to share
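
As an illustration only, the signature idea can be reproduced from a shell with openssl. The exact concatenation order and any truncation of the digest have to match what the Lua code actually implements, so treat this as a sketch of the HMAC-SHA1 plus Base64 idea rather than a drop-in replacement:

# hypothetical values; the seed must match lib.secret
seed="This is just a random text."
str="acme2015-08-15T20:30dir1/example1.txt"
echo -n "$str" | openssl dgst -sha1 -hmac "$seed" -binary | openssl base64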

With the ideas explained above, I think it’s easy to understand the goal of the solution. I developed it using NGINX and Lua, but the NGINX version used is not the stock one: it is a heavily patched version called OpenResty. This version is especially famous because some important Chinese websites run on it, for instance Taobao.com.

[schema: Expiration URL solution architecture]

In the above schema there is a master who wants to share a file from the internal private repository, but the file has a time restriction and the URL is only for one customer. Using the command line, the admin creates a unique URL with the desired constraints (expiration date, customer and file to share). The next step is to send the URL to the customer’s user. When the URL is requested, the NGINX server evaluates it and returns the desired file only if the URL is valid; that means the URL is not expired, the file exists, the customer identification is valid and the signature has not been modified.

NGINX Configuration

server {
 server_name downloads.local;

 location ~ ^/(?<signature>[^/]+)/(?<customer_id>[^/]+)/(?<expire_date>[^/]+)/(?<path_n_file>.*)$ {
 content_by_lua_file "lua/get_file.lua";
 }

 location / {
 return 403;
 }
}

This is the server part of the NGINX configuration file; the rest of the file can be whatever you want. Understanding this file is really simple: “server_name” works as always, so only the “location” commands are relevant. The first “location” is just a regular expression which extracts the relevant variables from the URL and passes them to the Lua script. All other URLs that don’t match the URI pattern fall into the “/” path, and the response is always “Forbidden” (HTTP 403 code). Then all the magic happens in the Lua code.

Lua scripts

Several Lua files are required:

  • create_secure_link.lua: creates secure URLs
  • get_file.lua: evaluates URLs and serves content of the required file
  • lib.lua: a module developed to reuse code among the other Lua files
  • sha1.lua: SHA-1 secure hash computation and HMAC-SHA1 signature computation in Lua (taken from https://github.com/kikito/sha.lua)

The “lib.lua” file requires configuration; at the beginning of the file there are three variables to set up:

lib.secret = "This is just a long string to set a seed"
lib.base_url = "http://downloads.local/"
lib.base_dir = "/tmp/downloads/"

Creating secure URLs is really simple; take a look at the command parameters:

$ ./create_secure_link.lua 

 ./create_secure_link.lua <customer_id> <expiration_date> <relative_path/filename>

Create URLs with expiration date.

 customer_id: any string identifying the customer who wants the URL
 expiration_date: when URL has to expire, format: YYYY-MM-DDTHH:MM
 relative_path/filename: relative path to file to transfer, base path is: /tmp/downloads/

Run example:

$ mkdir -p /tmp/downloads/dir1
$ echo hello > /tmp/downloads/dir1/example1.txt
$ ./create_secure_link.lua acme 2015-08-15T20:30 dir1/example1.txt
http://downloads.local/YjZhNDAzZDY0/acme/2015-08-15T20:30/dir1/example1.txt
$ date
Wed Aug 12 20:27:14 CEST 2015
$ curl http://downloads.local:55080/YjZhNDAzZDY0/acme/2015-08-15T20:30/dir1/example1.txt
hello
$ date
Wed Aug 12 20:31:40 CEST 2015
$ curl http://downloads.local:55080/YjZhNDAzZDY0/acme/2015-08-15T20:30/dir1/example1.txt
Link expired

Little video demonstration


May 08

Free dynamic DNS service

Reading time: < 1 minute

A long time ago there were several free dynamic DNS services, but nowadays it’s difficult to find one. And when you do find such a service, it usually has some important restrictions, like a limited number of updates per day or only a few subdomains per account. But in the end I found a good free service of this kind; it’s part of the guifi.net project and is called Qui. You only need a guifi.net account to use the service, and it’s really simple and clear. For my part, the compatibility with “ddclient” and the “mikrotik” script is really useful, and I want to highlight this functionality.

Jun 30

Enabling the Linux kernel to open LOTS of concurrent connections

Reading time: < 1 minute

Just a small recipe about how to enable the Linux kernel to open tons of concurrent connections. A really simple and useful post entry.

# widen the range of local ports available for outgoing connections
echo "10152 65535" > /proc/sys/net/ipv4/ip_local_port_range
# raise the system-wide limit of open file descriptors
sysctl -w fs.file-max=128000
# send TCP keepalive probes after 5 minutes of idle time
sysctl -w net.ipv4.tcp_keepalive_time=300
# enlarge the queues for accepted and half-open connections
sysctl -w net.core.somaxconn=250000
sysctl -w net.ipv4.tcp_max_syn_backlog=2500
# enlarge the queue for packets received by the network device
sysctl -w net.core.netdev_max_backlog=2500
# per-process open file limit for the current shell
ulimit -n 10240
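
Note that changes applied with sysctl -w are lost on reboot; to make them persistent, the usual place is /etc/sysctl.conf (a sketch with the same values):

# /etc/sysctl.conf
net.ipv4.ip_local_port_range = 10152 65535
fs.file-max = 128000
net.ipv4.tcp_keepalive_time = 300
net.core.somaxconn = 250000
net.ipv4.tcp_max_syn_backlog = 2500
net.core.netdev_max_backlog = 2500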
Jan 29

Routerboard CRS125-24G-1S-2HnD-IN (Mikrotik) Cloud Switch

Reading time: 1 – 2 minutes

I bought this product a few weeks ago and finally I can enjoy it at home. With this product you get a firewall, gateway, switch and wireless box with:

  • 24x Gigabit Ethernet ports
  • 1x SFP fiber port
  • 3G, 4G or any optional USB modem
  • With RouterOS inside you can manage: gateway, firewall, VPN and ad-hoc switching and routing configurations
  • 1000mW high power 2.4GHz 11n wireless AP
[photo: CRS125-24G-1S-2HnD-IN]

The official product page is here, where you can find the PDF brochure and other useful information.

If you are looking for a powerful product for your SOHO network, this is the solution; as I like to say, ‘this is one of the best communications servers’. It will be very difficult to find a feature or functionality that you cannot get from this product. The product is robust and stable, with all the flexibility of RouterOS.

Oct 11

Some recommendations about RESTful API design

Reading time: 4 – 6 minutes

I want to recommend that you watch the YouTube video called RESTful API design by Brian Mulloy. In this post I make a short summary of the most important ideas in the video, from my point of view, of course:

  • Use concrete plural nouns when you are defining resources.
  • Resource URLs have to focus on accessing a collection of elements or a specific element. Example:
    • /clients – get all clients
    • /clients/23 – get the client with ID 23
  • Map HTTP methods to maintain elements (CRUD); see the curl sketch after this list:
    • POST – CREATE
    • GET – READ
    • PUT – UPDATE
    • DELETE – DELETE
  • Workaround: if your REST client doesn’t support all HTTP methods, using a parameter called ‘method’ can be a good idea. For example, when you have to use the HTTP PUT method, it can be replaced by HTTP GET plus the parameter ‘method=put’ in the URL.
  • Sweep complexity behind the ‘?’. Use URL parameters to filter or add optional information to your request.
  • How to manage errors:
    • Use HTTP response codes to report errors. You can find a list of HTTP response codes in Wikipedia.
    • A JSON response example can look like this:
      { "message": "problem description", "more_info": "http://api.domain.tld/errors/12345" }
    • Workaround: if a REST client doesn’t know how to capture HTTP error codes and raises an error, losing control of the client, you can use the HTTP response code 200 and put a ‘response_code’ field in the JSON response object. It’s a good idea to make this feature optional via the URL parameter ‘supress_response_code=true’.
  • Versioning the API: use a literal ‘v’ followed by an integer number before the resource reference in the URL. It may be the simplest and most powerful solution in this case. Example: /v1/clients/
  • The selection of which information is returned in the response can be defined with URL parameters, like in this example: /clients/23?fields=name,address,city
  • Pagination of the response: use the parameters ‘limit’ and ‘offset’, and keep it simple. Example: ?limit=10&offset=0
  • Format of the answer: in this case I don’t completely agree with Brian; I prefer to use the HTTP header ‘Accept’ rather than his proposal. Anyway, both ideas are:
    • Use the HTTP header ‘Accept’ with the proper format for the answer, for example ‘Accept: application/json’ when you want a JSON response,
    • or use the extension ‘.json’ in the URL request to get the response in JSON format.
  • Use the JavaScript format for date and time information when you are formatting JSON objects.
  • Sometimes APIs need to expose actions; then we can’t define the action with a noun, so in this case use a verb. It’s common to need actions like: convert, translate, calculate, etc.
  • Searching: there are two cases:
    • Search inside a resource; in this case use parameters to apply filters.
    • Search across multiple resources; here it is useful to create a ‘search’ resource.
  • Count the elements inside a resource by simply adding ‘/count’ after the resource. Example: /clients/count
  • As far as possible, use a single base URL for all API resources, something like ‘http://api.domain.tld’.
  • Authentication: simply use OAuth 2.0.
  • To keep your API KISS, it’s usually a good idea to develop SDKs in several languages, where you can put more high-level features than in the API.
  • Inside an application each resource has its own API, but it’s not a good idea to publish it to the world; using a virtual API in a layer above it is more secure and powerful.
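
To make the conventions above concrete, here is a quick command line sketch against an imaginary API; api.domain.tld, the ‘clients’ resource and all the values are hypothetical:

# READ: list the first 10 clients, asking for JSON
curl -H 'Accept: application/json' 'http://api.domain.tld/v1/clients?limit=10&offset=0'
# READ: one element, returning only some fields
curl 'http://api.domain.tld/v1/clients/23?fields=name,address,city'
# CREATE: POST a new element to the collection
curl -X POST -H 'Content-Type: application/json' -d '{"name": "ACME"}' 'http://api.domain.tld/v1/clients'
# UPDATE: PUT on the element URL
curl -X PUT -H 'Content-Type: application/json' -d '{"city": "Barcelona"}' 'http://api.domain.tld/v1/clients/23'
# DELETE the element
curl -X DELETE 'http://api.domain.tld/v1/clients/23'
# count the elements of the collection
curl 'http://api.domain.tld/v1/clients/count'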