To avoid this in the future, before the partition fills up, it's a good idea to install purge-old-kernels and run it periodically. Installation and an example of use:
# installation
apt-get install bikeshed
# keep three old kernels:
purge-old-kernels --keep 3
# if you want to put that in the crontab use that command
purge-old-kernels --keep 3 -qy
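If you prefer to automate it with cron instead of remembering to run it, a weekly entry like the one below should do; the script path and file name are assumptions, so check where your distribution installs the tool:

#!/bin/sh
# drop this in /etc/cron.weekly/purge-old-kernels (path and name are just a suggestion)
/usr/bin/purge-old-kernels --keep 3 -qy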
If you're a GRUB user, don't forget to run:
update-grub2
Personally, this problem has been a nightmare for me with Ubuntu, especially version 12.04, which is installed on a lot of the servers I manage. I have repeated the previous process many times, and in the end I decided to document it because I always had to go to Google to find the proper steps to solve it.
After using VMware ESXi for a long time as the hypervisor for my virtual servers, I decided to stop paying OVH for that service and migrated my virtual machines to VPS servers at OVH. At the end of the day, two VPS at 3€/month each are enough, and I can drop a 50€/month dedicated server.
The biggest challenge I had to solve was migrating the mail server to a new server. Until now I was using pfSense as a firewall for my virtual servers. They were in a virtual network, and pfSense's anti-spam services and mail forwarding were enough to receive "cleaned" mail on my private mail server running Postfix and Dovecot.
The new configuration is just a cheap VPS (1 CPU, 2GB RAM, 10GB SSD) with Ubuntu 16.04, again with Postfix and Dovecot. But I decided to outsource the anti-spam, anti-malware and anti-virus service to MX Guarddog. I discovered the service just surfing on the big G. At only 0.25 cents per account per month it's a very good price, and it does everything I need and much more. Configuration is really simple if you know what you are doing, and they have a very good and simple control panel to manage the service. This is the perfect service for what I need.
In the control panel you can do everything you need: manage mail accounts and domains, view quarantined mail, run the configuration checks needed to validate that everything is ready, and maintain white and black lists. We'll see over the next few days whether the service has the quality I expect; I hope I have found a very good and cheap resource.
Sniffing and inspecting complex protocols with "tcpdump" is usually painful. Of course, "tcpflow" is a very useful tool, but sniffing from a console is not always enough. Wireshark is always a better option when it's time to debug and troubleshoot communication problems.
But it's not always easy to plug a network TAP where you want to sniff. If at that point we have a Linux box with "ssh" and "tcpdump", an interesting option is to stream the sniffed traffic to another box running Wireshark and dissect the packet octets into their layers, fields, etc.
When the Wireshark box runs Windows you need "plink.exe", and you can do things like piping a remote capture straight into Wireshark.
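A sketch of the command (the hostname, capture interface and Wireshark install path are assumptions; the "not port 22" filter keeps the SSH session itself out of the capture):

plink.exe -ssh root@sniffer-box "tcpdump -ni eth0 -s 0 -w - not port 22" | "C:\Program Files\Wireshark\Wireshark.exe" -k -i -

From a Linux box the same idea works with plain ssh:

ssh root@sniffer-box "tcpdump -ni eth0 -s 0 -w - not port 22" | wireshark -k -i -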
Two months ago I went to get my MacBook Air (mid-2011 version) and found this:
The batteries had exploded! It is curious because I have laptops stored away for many years, one of them almost 20 years old. Obviously its battery barely lasts, but it has never exploded. It is incredible that for a brand that cares about the quality of its products as much as Apple, with a product that was the best in its class 5 years ago, the battery exploded from one day to the next while simply sitting on a shelf.
The Apple Store didn't want to know about the problem because the laptop is out of warranty. Luckily it wasn't my daily laptop, and after buying a new battery on eBay I replaced it for less than 50€ and the laptop keeps running.
I don't know if anybody else has suffered this experience, but IMHO Apple has failed and I'm very disappointed with their reaction to my issue with the product. I know it's out of warranty, but I paid close to 1,700€ for a laptop less than 5 years ago and I don't expect that. Clearly this is a manufacturing problem with the battery. Apple may still make very good quality products, but day after day their customer support gets worse.
Just a final note: my current laptop is a Toshiba. I'm not proud of it, but it works quite well; so far it is far more powerful than the current MacBook Air at the same weight, and I don't have to carry a lot of connectors and cables because everything is built in, including the 4G modem.
It seems like a joke but it's true: after buying my Toshiba Portégé Z30-A-180 PT243 I was very proud of the laptop's performance and features. It shipped with Windows 7, and after some months of use the mouse pointer started moving on its own, drawing a diagonal across the screen; there was no trail, just a diagonal movement, and while it happened there was no way to get control of the mouse back. New Toshiba laptops have both a touchpad and a trackpoint, and neither of them responded while this was going on. Because it only happened from time to time, I didn't pay much attention to the problem.
Last Christmas holidays I upgraded the laptop to Windows 10, and I was very happy to see that 99.9% of my applications and settings were kept and ran perfectly. But after some weeks the mouse pointer movements came back, sometimes very often and sometimes less so. One afternoon I was totally desperate about the issue and decided to look it up on Google. I found a thread on the Toshiba support forum where other people were discussing the same issue. The proposed solutions are not perfect, but they were helpful for me: they point to a static electricity problem that affects the trackpoint, and the best option is to disable it and forget about the problem. Luckily I don't use the trackpoint, because for me the touchpad is more comfortable, so disabling the trackpoint is a good enough solution in my case.
So if you have automatic mouse movements on a Toshiba Portégé Z30, disable the trackpoint; don't forget that Toshiba calls that device an AccuPoint. Below you have a capture of the instructions to do that:
I hope this blog entry is as helpful for you as it has been for me.
The main goal of this post is to describe how to organize the flash partitions and how to modify the default OpenWRT boot sequence to support a flexible and powerful rescue mode for Raspberry Pi based projects. Just to put the explanation in context: when OpenWRT is built on a flash card for the Raspberry Pi, there are only two partitions.
The first one is a vFAT partition with the kernel, firmware and other configuration files; the second one is an ext4 partition with the root filesystem. The boot sequence loads the kernel, then mounts the root partition and runs the init script. If the ext4 filesystem is corrupted or cannot be mounted, the boot sequence stops and there is no way to recover without extracting the flash card.
Features
In this blog entry I'm going to describe a partition table and boot sequence strategy that avoids this kind of problem. Of course, there are other ways to get similar results, but I think this one is simple and powerful at the same time.
Summarizing, the features of this solution are:
reduced risk when running write-intensive applications
reduced risk of damaging the flash memory
fail-safe mode by pressing a button
support for application upgrades using opkg packages
support for operating system upgrades using opkg packages
This proposal assumes:
wear leveling handled by the flash card
a button connected to the GPIO pins
The idea
The Raspberry Pi requires a vFAT partition as its first flash partition, containing several files needed by the boot process; it acts as a bootloader substitute. For example, in that partition there are files like start*.elf and bootcode.bin, which are the GPU firmware and bootloaders. Another key file is kernel.img; this is the kernel used for booting. Bootloader parameters for the kernel are in a file called cmdline.txt, and firmware parameters are set in config.txt.
At this point the most important things to take into account are the kernel.img file and the cmdline parameters, because the kernel is loaded and executed with the parameter set from cmdline.txt. When the kernel boot process finishes, the root filesystem and the init process are determined from those cmdline parameters.
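As a reference, with the partition layout proposed below, cmdline.txt could look like this single line (the values are illustrative; the relevant parts for this post are root= and init=, which point the kernel at the minimal p2 system and its init script):

dwc_otg.lpm_enable=0 console=ttyAMA0,115200 root=/dev/mmcblk0p2 rootfstype=ext4 rootwait init=/etc/preinit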
At this point, taking a look at the proposed partition table could be useful (sizes are just a reference, use what you need):
p1 - vfat (~50MB)
p2 - ext4 - operating system base (read-only) (~150MB)
p3 - ext4 - operating system (read-write) (~250MB)
p4 - logical partition
  p4.1 - ext4 - your_application files (usually read-only)
  p4.2 - ext4 - your_application data (usually read-write)
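A sketch of how this layout could be created with parted (the device name and sizes are assumptions; under Linux the two logical partitions will actually show up as p5 and p6, and each partition still needs its filesystem created with mkfs afterwards):

parted /dev/mmcblk0 -- mklabel msdos
parted /dev/mmcblk0 -- mkpart primary fat32 4MiB 54MiB      # p1: boot
parted /dev/mmcblk0 -- mkpart primary ext4 54MiB 204MiB     # p2: OS base (read-only)
parted /dev/mmcblk0 -- mkpart primary ext4 204MiB 454MiB    # p3: OS (read-write)
parted /dev/mmcblk0 -- mkpart extended 454MiB 100%          # p4: container
parted /dev/mmcblk0 -- mkpart logical ext4 455MiB 2GiB      # p4.1: application files
parted /dev/mmcblk0 -- mkpart logical ext4 2GiB 100%        # p4.2: application data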
The key to the fail-safe boot process is partition p2, which holds a minimal OpenWRT installation with a modified init sequence. The main idea here is to detect whether a GPIO shortcut is made, usually by pressing a physical button. You can interact with the user with beeps: for example, emit one beep when you start waiting for the button press, then two beeps when a press is detected, or nothing if no button is pressed within 3 seconds. In the end the goal is to decide between a regular boot and a fail-safe boot.
My suggestion for the minimal OpenWRT is a small-footprint installation without kernel modules, just the monolithic kernel. Then reduce the init sequence to the minimum and add the fail-safe logic (GPIO button capture); if the button is pressed, stop the boot sequence and give the user a shell. The regular path is to invoke the init file of the rootfs (p3 in the partition table).
I think the idea is simple, and the complexity is concentrated in two places, both of them init files: the p2 partition has its own init file and p3 has the other one. The p2 init file brings up the minimum hardware needed to read the button and provides the rescue environment when it's needed. The p3 init file mounts the read-write partition and the regular filesystem, with the regular boot processes and everything else you need.
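As a reference, a stripped-down p2 init could look like the sketch below. The GPIO number, the device names, the button polarity and the "mnt" directory expected on p3 are all assumptions; OpenWRT normally starts userspace from /etc/preinit, so that is what gets invoked on the new root.

#!/bin/sh
# minimal rescue-aware init for p2 (a sketch, not a drop-in script)
mount -t proc proc /proc
mount -t sysfs sysfs /sys

# expose the rescue button; GPIO 17 is just an example
echo 17 > /sys/class/gpio/export
echo in > /sys/class/gpio/gpio17/direction

# a beep could be emitted here to tell the user the button window is open
echo "press the rescue button now (3 seconds)..." > /dev/console
sleep 3

# whether a press reads 1 or 0 depends on how the button is wired
if [ "$(cat /sys/class/gpio/gpio17/value)" = "1" ]; then
    # fail-safe boot: stay on the minimal read-only system and hand out a shell
    exec /bin/ash < /dev/console > /dev/console 2>&1
fi

# regular boot: switch to the read-write system on p3 and run its init
mount -t ext4 /dev/mmcblk0p3 /mnt
cd /mnt
pivot_root . mnt    # p3 needs an empty /mnt directory to receive the old root
exec chroot . /etc/preinit < /dev/console > /dev/console 2>&1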
Final notes
I know this is not a very practical post, but my intention is only to share some ideas I have in mind. I spend most of my time designing architectures, and I think this is a very powerful boot-sequence architecture for professional projects based on the Raspberry Pi and OpenWRT.
The best way to do what I describe in this post is to put p2 inside an initrd file referenced in the kernel parameters, because then the whole read-only system is a RAM partition and the rootfs init file gets PID 1, dropping the dual-init complexity. But I decided to change this part because in the past I had problems creating initrd files, especially when the space required is bigger than the available RAM. Anyway, it's important to take into account that an initrd file has the same purpose as the proposed p2 partition.
Lately I have found some useful web applications that expose a terminal in the browser. This is very useful when you are travelling, or when you have a remote server that you want to maintain or access from anywhere. Another interesting use of this kind of application is as a terminal for embedded devices.
I tried to use them as my default applications, but all of them have the same problem: keyboard shortcuts conflict with the browser. I'm used to a lot of shortcuts to manage my terminal application and remote shell, and this is a problem because most of those shortcuts are redefined by the browser. Maybe it's possible to disable browser shortcuts when using this kind of web application, but I didn't find out how.
I hope this small list is as useful for you as it is for me:
signature: calculated with the following formula, given a "seed" (a quick way to reproduce it from the shell is sketched after the parameter list below)
seed = “This is just a random text.”
str = customer_id + expire_date + path_n_file
signature = encode_base64( hmac_sha1( seed, str))
customer_id: just an arbitrary identifier used when you want to distinguish who uses the URL
expire_date: when the generated URL stops working
path_n_file: relative path in your private repository and the file to share
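As a sanity check, the same signature can be reproduced from the shell with OpenSSL. The values below are just an example, and the Lua script may truncate or re-encode the result differently, so compare against its actual output before relying on this:

seed="This is just a random text."
customer_id="acme"
expire_date="2015-08-15T20:30"
path_n_file="dir1/example1.txt"
printf '%s' "${customer_id}${expire_date}${path_n_file}" \
  | openssl dgst -sha1 -hmac "$seed" -binary | base64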
With the ideas explained above, I think it's enough to understand the goal of the solution. I developed it using NGINX and Lua, but the NGINX version used is not the stock one; it's a heavily patched version called OpenResty. This version is especially famous because some important Chinese websites run on it, for instance Taobao.com.
In the above schema there is an admin who wants to share a file from the internal private repository, but with a time restriction, and the URL is only for one customer. Using the command line, the admin creates a unique URL with the desired constraints (expiration date, customer and file to share). The next step is to send the URL to the customer's user. When the URL is requested, the NGINX server evaluates it and returns the desired file only if the URL is valid: it has not expired, the file exists, the customer identification is valid and the signature has not been tampered with.
The interesting part of the NGINX configuration file is the server block; the rest of the file can be whatever you want. Understanding it is really simple: "server_name" works as always, so only the "location" blocks are relevant. The first "location" is just a regular expression that captures the relevant variables of the URL and passes them to the Lua script. All other URLs that don't match the URI pattern fall into the "/" location and the response is always "Forbidden" (HTTP 403). Then all the magic happens in the Lua code.
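As a reference, a server block matching that description could look like this sketch (the listen port, the script path and the exact URI layout are assumptions taken from the examples further down):

server {
    listen 55080;
    server_name downloads.local;

    # /<signature>/<customer_id>/<expiration_date>/<relative_path/file>
    location ~ "^/(?<sig>[^/]+)/(?<customer>[^/]+)/(?<expire>[^/]+)/(?<file>.+)$" {
        content_by_lua_file /etc/nginx/lua/get_file.lua;
    }

    # anything that does not match the pattern above is rejected
    location / {
        return 403;
    }
}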
LUA scripts
There are some LUA files required:
create_secure_link.lua: creates secure URLs
get_file.lua: evaluates URLs and serves content of the required file
lib.lua: module developed to reuse code between other lua files
sha1.lua: SHA-1 secure hash computation, and HMAC-SHA1 signature computation in Lua (get from https://github.com/kikito/sha.lua)
You need to configure the "lib.lua" file; at the beginning of the file there are three variables to set up:
lib.secret = "This is just a long string to set a seed"lib.base_url = "http://downloads.local/"lib.base_dir = "/tmp/downloads/"
Creating secure URLs is really simple; take a look at the command parameters:
$ ./create_secure_link.lua
./create_secure_link.lua <customer_id> <expiration_date> <relative_path/filename>
Create URLs with expiration date.
  customer_id: any string identifying the customer who wants the URL
  expiration_date: when URL has to expire, format: YYYY-MM-DDTHH:MM
  relative_path/filename: relative path to file to transfer, base path is: /tmp/downloads/
Run example:
$ mkdir -p /tmp/downloads/dir1
$ echo hello > /tmp/downloads/dir1/example1.txt
$ ./create_secure_link.lua acme 2015-08-15T20:30 dir1/example1.txt
http://downloads.local/YjZhNDAzZDY0/acme/2015-08-15T20:30/dir1/example1.txt
$ date
Wed Aug 12 20:27:14 CEST 2015
$ curl http://downloads.local:55080/YjZhNDAzZDY0/acme/2015-08-15T20:30/dir1/example1.txt
hello
$ date
Wed Aug 12 20:31:40 CEST 2015
$ curl http://downloads.local:55080/YjZhNDAzZDY0/acme/2015-08-15T20:30/dir1/example1.txt
Link expired
Last weekend I gave my talk about "SmartHome" again. This time I introduced the new feedback system based on "Tasker + Auto-notification", as well as task execution based on "crontab". I also included a small preview of the new project I'm working on to integrate the water softener data into OpenHAB, using a webcam and OpenCV to process the captured images.
Thanks to Xavi, Gerardo and Laura for giving me this opportunity. Both SCG15 and SAX2015 are very friendly, family-like events, and the setting is spectacular as well. A real discovery which, despite having heard about it, I had not been able to enjoy first hand.
A long time ago there were several free dynamic DNS services, but nowadays it's difficult to find one. And when you find one, it usually has some important restrictions, like a limited number of updates per day or only a few subdomains per account. But in the end I found a good free service of this kind; it's part of the guifi.net project and it's called Qui. You only need a guifi.net account to use it, and it's really simple and clear. For my part, the compatibility with "ddclient" and the "mikrotik" script is really useful, and I want to highlight this functionality.
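For reference, a ddclient configuration usually looks like the sketch below; the protocol and server values here are only placeholders, so check the Qui documentation for the real ones before using it:

# /etc/ddclient.conf (sketch; protocol and server are placeholders, not Qui's real values)
protocol=dyndns2
use=web
server=dyndns.example.org
login=my-guifi-user
password=my-secret
myhost.example.com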