Jan 21

Using Ansible as a Python library

Reading time: 2 – 4 minutes

Ansible is a very powerful tool. Using playbooks, something like a cookbook, it is very easy to automate system maintenance tasks. I have used Puppet and other similar tools, but IMHO Ansible is the best one.

In some cases you need to manage dynamic systems, and taking advantage of Ansible as a Python library is a very good complement for your scripts. This was my latest requirement, and because of that I decided to share some simple Python snippets that help you understand how to use Ansible as a Python library.

First, an example of how to call an Ansible module with just one host in the inventory (test_modules.py; note this code targets the pre-2.0 Ansible Python API):

#!/usr/bin/python 
import ansible.runner
import ansible.playbook
import ansible.inventory
from ansible import callbacks
from ansible import utils
import json

# the fastest way to set up the inventory

# hosts list
hosts = ["10.11.12.66"]
# set up the inventory, if no group is defined then 'all' group is used by default
example_inventory = ansible.inventory.Inventory(hosts)

pm = ansible.runner.Runner(
    module_name = 'command',
    module_args = 'uname -a',
    timeout = 5,
    inventory = example_inventory,
    subset = 'all' # name of the hosts group 
    )

out = pm.run()

print json.dumps(out, sort_keys=True, indent=4, separators=(',', ': '))
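
Instead of dumping the raw JSON, you can walk the result yourself: Runner.run() returns a dict with the reachable hosts under 'contacted' and the unreachable ones under 'dark'. A minimal sketch:

for host, result in out['contacted'].items():
    # 'stdout' holds the command output for the 'command' module
    print "%s -> %s" % (host, result.get('stdout', ''))
for host, result in out['dark'].items():
    print "%s unreachable: %s" % (host, result)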

As a second example, we’re going to use a simple Ansible playbook with the following content (test.yml):

- hosts: sample_group_name
  tasks:
    - name: just an uname
      command: uname -a
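
Note that any host variable set from the Python code below (for example 'var', defined further down) could be referenced inside this playbook with the usual Jinja2 syntax, {{ var }}.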

The Python code which uses that playbook is (test_playbook.py):

#!/usr/bin/python 
import ansible.runner
import ansible.playbook
import ansible.inventory
from ansible import callbacks
from ansible import utils
import json

### setting up the inventory

## first of all, set up a host (or more)
example_host = ansible.inventory.host.Host(
    name = '10.11.12.66',
    port = 22
    )
# with its variables to modify the playbook
example_host.set_variable( 'var', 'foo')

## secondly, set up the group where the host(s) have to be added
example_group = ansible.inventory.group.Group(
    name = 'sample_group_name'
    )
example_group.add_host(example_host)

## the last step is setting up the inventory itself
example_inventory = ansible.inventory.Inventory()
example_inventory.add_group(example_group)
example_inventory.subset('sample_group_name')

# setting callbacks
stats = callbacks.AggregateStats()
playbook_cb = callbacks.PlaybookCallbacks(verbose=utils.VERBOSITY)
runner_cb = callbacks.PlaybookRunnerCallbacks(stats, verbose=utils.VERBOSITY)

# creating the playbook instance to run, based on "test.yml" file
pb = ansible.playbook.PlayBook(
    playbook = "test.yml",
    stats = stats,
    callbacks = playbook_cb,
    runner_callbacks = runner_cb,
    inventory = example_inventory,
    check=True
    )

# running the playbook
pr = pb.run()  

# print the summary of results for each host
print json.dumps(pr, sort_keys=True, indent=4, separators=(',', ': '))
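
PlayBook.run() returns a per-host summary with counters such as 'ok', 'changed', 'failures' and 'unreachable', so the raw dump can be replaced by an explicit check. A minimal sketch:

for host, counters in pr.items():
    if counters.get('failures', 0) or counters.get('unreachable', 0):
        print "%s FAILED" % host
    else:
        print "%s ok" % host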

If you want to download the example files you can go to my GitHub account: github.com/oriolrius/programming-ansible-basics

I hope it was useful for you.

Dec 01

Short MIIMETIQ definition

Reading time: 2 – 3 minutes

M2MCF and MIIMETIQ

In recent months at M2M Cloud Factory we have been working on MIIMETIQ. In the last few weeks I’ve been thinking about how to define MIIMETIQ briefly, and this is my definition; please tell me if you can understand it. Of course, you have to know we’re focused on the Internet of Things and M2M market.

  • MIIMETIQ is an IoT/M2M framework, so it is the first thing to set up when developing your vertical solution.
  • Framework: with a well-defined architecture, a framework is a set of functions ready to create any application. Everything else is open and adaptable.
  • The MIIMETIQ architecture is service-oriented and uses AMQP as a message broker to connect the services.
  • MIIMETIQ has several modules; we define a module as a set of services. Basically MIIMETIQ has five modules:
    • Identity Manager: manages users, groups, roles and any other kind of entity the project needs, together with their security.
    • Assets Manager: a data model manager; the integrator creates the business logic and data models here.
    • Distribution System: a set of agnostic connectivity layers for different types of devices.
    • AENM: several time series and other signals flow through the AMQP broker; these data are events, and using rules those events can be converted into alarms, and some alarms have to be notified to the proper services, systems or people.
    • Control Panel UI: an administration dashboard, in the form of a UI, to set up and monitor the most common uses of MIIMETIQ.
  • Using those modules, the integrators usually create their own user interface to satisfy customer requirements. In M2MCF we create those UIs using ADUX (Advanced Development User Experience).
  • After configuring MIIMETIQ, the integrator has two customized APIs to connect their code with MIIMETIQ: one is a REST API and the other is AMQP.
  • Finally, everything inside MIIMETIQ can be customized, because flexibility is very important when you have a horizontal solution.

Oct 21

OpenAM: some ssoadm commands for reference

Reading time: 3 – 4 minutes

OpenAM is as powerful as it is complicated sometimes. In this case I spent a lot of time understanding how to configure simple settings, so I decided to take notes about it in this blog entry.

First of all, don’t forget to set the environment variables and go to the ssoadm path:

export JAVA_HOME="/usr/lib/jvm/java-6-openjdk-amd64/jre"
export CLASSPATH="/var/lib/tomcat7/webapps/openam/WEB-INF/lib/policy-plugins.jar::/var/lib/tomcat7/webapps/openam/WEB-INF/lib/openam-core-11.0.0.jar"

cd /opt/openam/ssoadmin/openam/bin
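
The -f /tmp/oam.pwd flag used in the commands below points to a file containing the amAdmin password, and ssoadm refuses to use it unless its permissions are restricted to the owner. A minimal sketch to create it (the password is a placeholder):

import os

# write the amAdmin password file that ssoadm expects (-f flag)
with open('/tmp/oam.pwd', 'w') as f:
    f.write('your_amadmin_password')
# ssoadm requires the file to be readable only by its owner
os.chmod('/tmp/oam.pwd', 0o400)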

Getting the list of user identities:

./ssoadm list-identities -u amadmin -f /tmp/oam.pwd -e / -t User -x "*"

anonymous (id=anonymous,ou=user,dc=openam)
demo (id=demo,ou=user,dc=openam)
serviceusername (id=serviceusername,ou=user,dc=openam)
amAdmin (id=amAdmin,ou=user,dc=openam)
Search of Identities of type User in realm, / succeeded.

another useful query would be:

./ssoadm list-identities -u amadmin -f /tmp/oam.pwd -e / -t Role -x "*"

No plug-ins configured for this operation

But as you can see it doesn’t work and I don’t know how to solve it.

Taking a look at the GUI, you can get to the identities list via: Access Control > / (Top Level Realm) > Privileges

On this webpage you have a list of role identities; in my case I only have this one: “All Authenticated Users”. Inside this identity I can set different privileges:

  • REST calls for Policy Evaluation (EntitlementRestAccess)
  • Read and write access to all log files (LogAdmin)
  • REST calls for searching entitlements (PrivilegeRestReadAccess)
  • Read access to all log files (LogRead)
  • Read and write access to all federation metadata configurations (FederationAdmin)
  • Read and write access only for policy properties (PolicyAdmin)
  • Read and write access to all configured Agents (AgentAdmin)
  • Read and write access to all realm and policy properties (RealmAdmin)
  • REST calls for managing entitlements (PrivilegeRestAccess)
  • Write access to all log files (LogWrite)

If we want to remove a privilege:

root@vm:/opt/openam/ssoadmin/openam/bin# ./ssoadm remove-privileges -u amAdmin -f /tmp/oam.pwd -e / -g EntitlementRestAccess -i "All Authenticated Users" -t role

Privileges were removed from identity, All Authenticated Users of type, role in realm, /.

or adding a privilege:

root@vm:/opt/openam/ssoadmin/openam/bin# ./ssoadm add-privileges -u amAdmin -f /tmp/oam.pwd -e / -g EntitlementRestAccess -i "All Authenticated Users" -t role

Talking about policies, exporting:

./ssoadm list-policies -u amadmin -f /tmp/oam.pwd -e / -o /tmp/policies.xml

and if we want to import the policies:

./ssoadm create-policies -u amAdmin -f /tmp/oam.pwd -e / --xmlfile /tmp/policies.xml

creating a user:

./ssoadm create-identity -u amadmin -f /tmp/oam.pwd  -e / -i serviceusername -t User --attributevalues "userpassword=servicepassword"


Jul 08

sslsnoop – hacking OpenSSH

Reading time: < 1 minute

Using sslsnoop you can dump the SSH keys used in a session and decode the ciphered traffic. The supported algorithms are: aes128-ctr, aes192-ctr, aes256-ctr, blowfish-cbc, cast128-cbc. Basic sslsnoop information:

 $ sudo sslsnoop # try ssh, sshd and ssh-agent… for various things
 $ sudo sslsnoop-openssh live `pgrep ssh` # dumps SSH decrypted traffic in outputs/
 $ sudo sslsnoop-openssh offline --help # dumps SSH decrypted traffic in outputs/ from a pcap file
 $ sudo sslsnoop-openssl `pgrep ssh-agent` # dumps RSA and DSA keys

Take a look at the project on the sslsnoop GitHub page.

Jun 30

Enabling the Linux kernel to open LOTS of concurrent connections

Reading time: < 1 minute

Just a small recipe about how to enable the Linux kernel to open tons of concurrent connections. A really simple and useful post entry.

echo "10152 65535" > /proc/sys/net/ipv4/ip_local_port_range
sysctl -w fs.file-max=128000
sysctl -w net.ipv4.tcp_keepalive_time=300
sysctl -w net.core.somaxconn=250000
sysctl -w net.ipv4.tcp_max_syn_backlog=2500
sysctl -w net.core.netdev_max_backlog=2500
ulimit -n 10240
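
Note that sysctl -w changes are not persistent across reboots; to keep them, the same keys can be added to /etc/sysctl.conf, for example:

net.ipv4.tcp_keepalive_time = 300
net.core.somaxconn = 250000

The ulimit value can be made permanent through /etc/security/limits.conf.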

Jan 24

Talk: The mini-PC revolution: Raspberry PI, Arduino and more

Reading time: 1 – 2 minutes

Yesterday evening I gave a talk at the FIB (Facultat d’Informàtica de Barcelona), part of the UPC (Universitat Politècnica de Catalunya). In this talk I explained what Arduino and Raspberry PI are and how they differ, and I also presented a whole set of alternative solutions and experiences on the subject.

In this link you can find the slides of “La revolució dels mini-PC: Raspberry PI, Arduino i més”, and the video is available on the FIB server.

The video is now also available on YouTube:

And you can view the slides from this very post:

I look forward to your feedback in the comments; I hope you find it useful.

Oct 04

Recording linux desktop and audio

Reading time: < 1 minute

I have two full-HD displays on my desktop and I want to record the second one while also recording the mic. The output format of the recording has to be MKV with h264 as the video codec and AAC as the audio codec. After some tests with VLC and FFMPEG I finally got the solution with this command:

ffmpeg -f alsa -ac 2 -i default -f x11grab -r 15 -s 1920x1080 -i :0.0+1920,0 \
       -acodec pcm_s16le -vcodec libx264 -preset ultrafast -threads 0 -y Test.mkv
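
In that command, ':0.0+1920,0' selects the X display and starts grabbing at offset 1920,0, i.e. the second full-HD screen, while the audio is captured as raw PCM (pcm_s16le) and converted later, as explained below.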

When I finish recording the clip I have to convert the audio track, because if I try to convert the audio format while I’m recording, the audio channel gets delays or sync problems with the video channel.

ffmpeg -i Test.mkv -map 0:0 -map 0:1 \
       -c:v copy \
       -c:a libfdk_aac -profile:a aac_he_v2 -b:a 96k \
       output.mkv

Sep 30

Hello World using the ‘kombu’ library and Python

This entry is part 4 of 4 in the series AMQP and RabbitMQ

Reading time: 4 – 7 minutes

Sometimes schemas and snippets don’t need long descriptions. If you think that’s not enough in this case, tell me and I’ll add explanations.

Using a Python library called kombu as an abstraction to talk to the AMQP broker, we are going to build different message routes, setting up each type of exchange. As a backend I used RabbitMQ with its default configuration.

AMQP schema using an exchange of type direct

(schema: kombu-direct)

Queue definition:

from kombu import Exchange, Queue

task_exchange = Exchange("msgs", type="direct")
queue_msg_1 = Queue("messages_1", task_exchange, routing_key = 'message_1')
queue_msg_2 = Queue("messages_2", task_exchange, routing_key = 'message_2')

The producer:

from __future__ import with_statement
from queues import task_exchange

from kombu.common import maybe_declare
from kombu.pools import producers


if __name__ == "__main__":
    from kombu import BrokerConnection

    connection = BrokerConnection("amqp://guest:guest@localhost:5672//")

    with producers[connection].acquire(block=True) as producer:
        maybe_declare(task_exchange, producer.channel)
        
        payload = {"type": "handshake", "content": "hello #1"}
        producer.publish(payload, exchange = 'msgs', serializer="pickle", routing_key = 'message_1')
        
        payload = {"type": "handshake", "content": "hello #2"}
        producer.publish(payload, exchange = 'msgs', serializer="pickle", routing_key = 'message_2')

One consumer:

from queues import queue_msg_1
from kombu.mixins import ConsumerMixin

class C(ConsumerMixin):
    def __init__(self, connection):
        self.connection = connection
        return
    
    def get_consumers(self, Consumer, channel):
        return [Consumer( queue_msg_1, callbacks = [ self.on_message ])]
    
    def on_message(self, body, message):
        print ("RECEIVED MSG - body: %r" % (body,))
        print ("RECEIVED MSG - message: %r" % (message,))
        message.ack()
        return
    

if __name__ == "__main__":
    from kombu import BrokerConnection
    from kombu.utils.debug import setup_logging
    
    setup_logging(loglevel="DEBUG")

    with BrokerConnection("amqp://guest:guest@localhost:5672//") as connection:
        try:
            C(connection).run()
        except KeyboardInterrupt:
            print("bye bye")


AMQP schema using an exchange of type fanout

(schema: kombu-fanout)

Queue definition:

from kombu import Exchange, Queue

task_exchange = Exchange("ce", type="fanout")
queue_events_db = Queue("events.db", task_exchange)
queue_events_notify = Queue("events.notify", task_exchange)


The producer:

from __future__ import with_statement
from queues import task_exchange

from kombu.common import maybe_declare
from kombu.pools import producers


if __name__ == "__main__":
    from kombu import BrokerConnection

    connection = BrokerConnection("amqp://guest:guest@localhost:5672//")

    with producers[connection].acquire(block=True) as producer:
        maybe_declare(task_exchange, producer.channel)
        
        payload = {"operation": "create", "content": "the object"}
        producer.publish(payload, exchange = 'ce', serializer="pickle", routing_key = 'user.write')

        payload = {"operation": "update", "content": "updated fields", "id": "id of the object"}
        producer.publish(payload, exchange = 'ce', serializer="pickle", routing_key = 'user.write')

One consumer:

from queues import queue_events_db
from kombu.mixins import ConsumerMixin

class C(ConsumerMixin):
    def __init__(self, connection):
        self.connection = connection
        return
    
    def get_consumers(self, Consumer, channel):
        return [Consumer( queue_events_db, callbacks = [self.on_message])]
    
    def on_message(self, body, message):
        print ("save_db: RECEIVED MSG - body: %r" % (body,))
        print ("save_db: RECEIVED MSG - message: %r" % (message,))
        message.ack()
        return


if __name__ == "__main__":
    from kombu import BrokerConnection
    from kombu.utils.debug import setup_logging
    
    setup_logging(loglevel="DEBUG")

    with BrokerConnection("amqp://guest:guest@localhost:5672//") as connection:
        try:
            C(connection).run()
        except KeyboardInterrupt:
            print("bye bye")


AMQP schema using an exchange of type topic

(schema: kombu-topic)

Queue definition:

from kombu import Exchange, Queue

task_exchange = Exchange("user", type="topic")
queue_user_write = Queue("user.write", task_exchange, routing_key = 'user.write')
queue_user_read = Queue("user.read", task_exchange, routing_key = 'user.read')
queue_notify = Queue("notify", task_exchange, routing_key = 'user.#')
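
With a topic exchange, '#' in a binding key matches zero or more dot-separated words ('*' matches exactly one), so the 'notify' queue bound to 'user.#' receives both the 'user.write' and 'user.read' messages.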


The producer:

from __future__ import with_statement
from queues import task_exchange

from kombu.common import maybe_declare
from kombu.pools import producers


if __name__ == "__main__":
    from kombu import BrokerConnection

    connection = BrokerConnection("amqp://guest:guest@localhost:5672//")

    with producers[connection].acquire(block=True) as producer:
        maybe_declare(task_exchange, producer.channel)
        
        payload = {"operation": "create", "content": "the object"}
        producer.publish(payload, exchange = 'user', serializer="pickle", routing_key = 'user.write')

        payload = {"operation": "update", "content": "updated fields", "id": "id of the object"}
        producer.publish(payload, exchange = 'user', serializer="pickle", routing_key = 'user.write')

        payload = {"operation": "delete", "id": "id of the object"}
        producer.publish(payload, exchange = 'user', serializer="pickle", routing_key = 'user.write')

        payload = {"operation": "read", "id": "id of the object"}
        producer.publish(payload, exchange = 'user', serializer="pickle", routing_key = 'user.read')

One consumer:

from queues import queue_notify
from kombu.mixins import ConsumerMixin

class C(ConsumerMixin):
    def __init__(self, connection):
        self.connection = connection
        return
    
    def get_consumers(self, Consumer, channel):
        return [Consumer( queue_notify, callbacks = [self.on_message])]
    
    def on_message(self, body, message):
        print ("save_db: RECEIVED MSG - body: %r" % (body,))
        print ("save_db: RECEIVED MSG - message: %r" % (message,))
        message.ack()
        return


if __name__ == "__main__":
    from kombu import BrokerConnection
    from kombu.utils.debug import setup_logging
    
    setup_logging(loglevel="DEBUG")

    with BrokerConnection("amqp://guest:guest@localhost:5672//") as connection:
        try:
            C(connection).run()
        except KeyboardInterrupt:
            print("bye bye")


Simple queues

Kombu implements SimpleQueue and SimpleBuffer as a simple solution for queues with an exchange of type ‘direct’, where the exchange name, routing key and queue name are all the same.

Pusher:

from kombu import BrokerConnection

connection = BrokerConnection("amqp://guest:guest@localhost:5672//")
queue = connection.SimpleQueue("logs")

payload = { "severity":"info", "message":"this is just a log", "ts":"2013/09/30T15:10:23" }
queue.put(payload, serializer='pickle')

queue.close()

Getter:

from kombu import BrokerConnection
from Queue import  Empty

connection = BrokerConnection("amqp://guest:guest@localhost:5672//")

queue = connection.SimpleQueue("logs")

while 1:
    try:
        message = queue.get(block=True, timeout=1)
        print message.payload
        message.ack()
    except Empty:
        pass
    except KeyboardInterrupt:
        break

queue.close()

The files

Download all example files: kombu-tests.tar.gz

Sep 25

Server send push notifications to client browser without polling

Reading time: 5 – 8 minutes

Nowadays the latest versions of browsers support websockets, and it’s a good idea to use them to open a permanent channel to the server and receive push notifications from it. In this case I’m going to use the Mosquitto (MQTT) server behind lighttpd with mod_websocket as the notifications server. Mosquitto is a lightweight MQTT server written in C and very easy to set up. The best advantage of using MQTT is the possibility to create publish/subscribe queues, which is very useful when you want to have more than one notification channel. As usual in pub/sub services, we can subscribe the client to a well-defined topic or use a pattern to subscribe to more than one topic. If you’re not familiar with MQTT, now is the best moment to read a little bit about this interesting protocol; it’s not the purpose of this post to explain MQTT basics.

A few weeks ago I set up the following architecture just to test that idea:

(schema: websocket gateway to the Mosquitto MQTT server with a JavaScript MQTT client)

The browser

Now it’s time to explain this proof of concept. The HTML page contains simple JavaScript code which uses the mqttws31.js library from Paho. This JavaScript code connects to the server using secure websockets. It doesn’t have any other security measure for now; maybe in future posts I’ll explain some interesting ideas to authenticate the websocket. At the end of the post you can download all source code and configuration files. But now it’s time to understand the most important parts of the client code.

client = new Messaging.Client("ns.example.tld", 443, "unique_client_id");
client.onConnectionLost = onConnectionLost;
client.onMessageArrived = onMessageArrived;
client.connect({onSuccess:onConnect, onFailure:onFailure, useSSL:true});

The last part is very simple: the client connects to the server and links some callbacks to the defined functions. Pay attention to the ‘useSSL’ connect option, which is used to force an SSL connection with the server.

There are two especially interesting functions linked to callbacks; the first one is:

function onConnect() {
  client.subscribe("/news/+/sport", {qos:1,onSuccess:onSubscribe,onFailure:onSubscribeFailure});
}

As you can imagine, this callback is called when the connection is established. When that happens, the client subscribes to all channels matching ‘/news/+/sport’, for example ‘/news/europe/sport’ or ‘/news/usa/sport’, etc. We could also use something like ‘/news/#’, which means we want to subscribe to all channels starting with ‘/news/’. If you only want to subscribe to one channel, put the full name of the channel in that parameter. The next parameter is a dictionary with the quality of service to use, and it links two more callbacks.

The second interesting function to understand is:

function onMessageArrived(message) {
  console.log("onMessageArrived:"+message.payloadString);
};

It’s called when a new message is received from the server; in this example, the message is printed to the console with the log method.

The server

I used an Ubuntu 12.04 server with the following extra repositories:

# lighttpd + mod_websocket
deb http://ppa.launchpad.net/roger.light/ppa/ubuntu precise main
deb-src http://ppa.launchpad.net/roger.light/ppa/ubuntu precise main

# mosquitto
deb http://ppa.launchpad.net/mosquitto-dev/mosquitto-ppa/ubuntu precise main
deb-src http://ppa.launchpad.net/mosquitto-dev/mosquitto-ppa/ubuntu precise main

With these new repositories you can install required packages:

apt-get install lighttpd lighttpd-mod-websocket mosquitto mosquitto-clients

After installation it’s very easy to run mosquitto in test mode: use a console for that and type the command mosquitto; you should see something like this:

# mosquitto
1379873664: mosquitto version 1.2.1 (build date 2013-09-19 22:18:02+0000) starting
1379873664: Using default config.
1379873664: Opening ipv4 listen socket on port 1883.
1379873664: Opening ipv6 listen socket on port 1883.

The lighttpd configuration file used for testing is:

server.modules = (
        "mod_websocket",
)

websocket.server = (
        "/mqtt" => ( 
                "host" => "127.0.0.1",
                "port" => "1883",
                "type" => "bin",
                "subproto" => "mqttv3.1"
        ),
)

server.document-root        = "/var/www"
server.upload-dirs          = ( "/var/cache/lighttpd/uploads" )
server.errorlog             = "/var/log/lighttpd/error.log"
server.pid-file             = "/var/run/lighttpd.pid"
server.username             = "www-data"
server.groupname            = "www-data"
server.port                 = 80

$SERVER["socket"] == ":443" {
    ssl.engine = "enable" 
    ssl.pemfile = "/etc/lighttpd/certs/sample-certificate.pem" 
    server.name = "ns.example.tld"
}

Remember to change ‘ssl.pemfile’ to your real certificate file and ‘server.name’ to your real server name. Then restart lighttpd and validate the SSL configuration using something like:

openssl s_client -host ns.example.tld -port 443

You should see the SSL negotiation, and then you can try to send HTTP commands, for example: “GET / HTTP/1.0”. Now the server is ready.

The Test

Now you have to load the HTML test page in your browser and check that the connection reaches the server, and that the mosquitto console reports the incoming connection. Of course, you can modify the JavaScript code to print more log information and follow how the client connects to the MQTT server and subscribes to the topic pattern.

If you want to publish something on the MQTT server you can use the CLI, with the mosquitto_pub command:

mosquitto_pub -h ns.example.tld -t '/news/europe/sport' -m 'this is the message about european sports'

Take a look at your browser’s JavaScript console: you should see how the client prints the message there. If it fails, review the steps and debug each one to find the problem. If you need help, leave me a message. Of course, there are many different ways to publish messages; for example, you could use Python code to publish messages to the MQTT server. In the same way, you can subscribe more than just browsers to the topics; for example, you could subscribe a Python client:

import mosquitto

def on_connect(mosq, obj, rc):
    print("rc: "+str(rc))

def on_message(mosq, obj, msg):
    print(msg.topic+" "+str(msg.qos)+" "+str(msg.payload))

def on_publish(mosq, obj, mid):
    print("mid: "+str(mid))

def on_subscribe(mosq, obj, mid, granted_qos):
    print("Subscribed: "+str(mid)+" "+str(granted_qos))

def on_log(mosq, obj, level, string):
    print(string)

mqttc = mosquitto.Mosquitto("the_client_id")
mqttc.on_message = on_message
mqttc.on_connect = on_connect
mqttc.on_publish = on_publish
mqttc.on_subscribe = on_subscribe

mqttc.connect("ns.example.tld", 1883, 60)
mqttc.subscribe("/news/+/sport", 0)

rc = 0
while rc == 0:
    rc = mqttc.loop()

Pay attention to the server port: it isn’t the ‘https’ port (443/tcp), because now the code is using a real MQTT client and the websocket gateway isn’t needed.
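
For completeness, here is a minimal sketch of publishing from Python with the same mosquitto library (the equivalent of the mosquitto_pub call above; the client id is a placeholder):

import mosquitto

# publish one message, mirroring the mosquitto_pub example above
mqttc = mosquitto.Mosquitto("the_publisher_id")
mqttc.connect("ns.example.tld", 1883, 60)
mqttc.publish("/news/europe/sport", "this is the message about european sports", 0)
mqttc.loop()  # process network events so the message is actually sent
mqttc.disconnect()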

The files

  • mqtt.tar.gz – inside this tar.gz you can find all referenced files

Sep 20

How to get an MP3 file from a WebM video

Reading time: < 1 minute

Another title for this post could be: “Getting audio from video clips”, because you can do it with MP4 (MPEG-4), WebM, MOV, FLV, etc. We are going to use ffmpeg for that:

ffmpeg -i input_file.webm -ab 128k -ar 44100 out_file.mp3

The meaning of the parameters:

  • ab: the audio bitrate in bps
  • ar: the audio sample rate in Hz

And if you have a directory with a lot of files to convert you could use:

find . -name "*.webm" -print0 |while read -d $'\0' file; do ffmpeg -i "$file" -ab 128k -ar 44100 -y "${file%.webm}.mp3";done

Pay attention to the combination of the “find” and “while read” commands, because we want to support file names with spaces.
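
For those who prefer Python over the shell one-liner, a minimal sketch of the same batch conversion (it assumes the ffmpeg binary is on the PATH):

import os
import subprocess

# walk the tree and convert every .webm into an .mp3 next to it
for dirpath, dirnames, filenames in os.walk('.'):
    for name in filenames:
        if name.endswith('.webm'):
            src = os.path.join(dirpath, name)
            dst = src[:-len('.webm')] + '.mp3'
            subprocess.call(['ffmpeg', '-i', src, '-ab', '128k', '-ar', '44100', '-y', dst])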

I hope this is as useful for you as for me.