Free Software: an act of subversive playful cleverness (blog.stone-head.org)

Apache Phoenix for Cloudera CDH (20 Dec 2014)

Apache Phoenix is a relational database layer over HBase, delivered as a client-embedded JDBC driver targeting low-latency queries over HBase data. Phoenix takes your SQL query, compiles it into a series of HBase scans, and orchestrates the running of those scans to produce regular JDBC result sets.

What the above statement means for developers or data scientists is that you can “talk” SQL to your HBase cluster. Sounds good, right? Setting up Phoenix on Cloudera CDH can be really frustrating and time-consuming, so I wrapped up references from across the web, along with my own findings, to get both to play nice.
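
To give a flavor of what that looks like, here is a minimal, hypothetical Phoenix session; the table and column names are made up for illustration, and note that Phoenix uses UPSERT rather than INSERT:

CREATE TABLE IF NOT EXISTS metrics (
    host VARCHAR NOT NULL,
    ts   DATE NOT NULL,
    cpu  DECIMAL
    CONSTRAINT pk PRIMARY KEY (host, ts));

UPSERT INTO metrics VALUES ('web01', CURRENT_DATE(), 0.75);

-- compiled by Phoenix into HBase scans behind the scenes
SELECT host, AVG(cpu) FROM metrics GROUP BY host;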

Building Apache Phoenix

Because of dependency mismatches in the pre-built binaries, supporting Cloudera’s CDH requires building Phoenix against the component versions that match the CDH deployment. The CDH version I used is CDH4.7.0, but this guide applies to any CDH4+ release.

Note: You can find CDH component versions in the “CDH Packaging and Tarball Information” section of the “Cloudera Release Guide”. Current release information (CDH5.2.1) is available there.

Preparing Phoenix build environment

Phoenix can be built using Maven or Gradle. General instructions can be found on the Phoenix “Building” webpage.

Before building Phoenix you need to have:

  • JDK v6 (or v7, depending on which CDH version you want to support)
  • Maven 3
  • git
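
Before moving on, you can sanity-check the build environment; these commands assume the tools are already on your PATH:

java -version    # expect 1.6 or 1.7, matching your CDH target
mvn -version     # expect Maven 3.x
git --version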

Checkout correct Phoenix branch

Phoenix has two major release versions:

  • 3.x – supports HBase 0.94.x   (Available on CDH4 and previous versions)
  • 4.x – supports HBase 0.98.1+ (Available since CDH5)

Clone the Phoenix git repository

git clone https://github.com/apache/phoenix.git

Work with the correct branch

git fetch origin
git checkout 3.2
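
If you are unsure which release branches are available, list the remote branches first:

git branch -r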

Modify dependencies to match CDH

Before building Phoenix you need to modify the dependencies to match the version of CDH you are targeting. Edit phoenix/pom.xml and make the following changes:

Add Cloudera’s Maven repository

+    <repository>
+        <id>cloudera</id>
+        <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
+    </repository>

Change component versions to match CDH’s.

     
-    <hadoop-one.version>1.0.4</hadoop-one.version>
-    <hadoop-two.version>2.0.4-alpha</hadoop-two.version>
+    <hadoop-one.version>2.0.0-mr1-cdh4.7.0</hadoop-one.version>
+    <hadoop-two.version>2.0.0-cdh4.7.0</hadoop-two.version>
     <!-- Dependency versions -->
-    <hbase.version>0.94.19</hbase.version>
+    <hbase.version>0.94.15-cdh4.7.0</hbase.version>
     <commons-cli.version>1.2</commons-cli.version>
-    <hadoop.version>1.0.4</hadoop.version>
+    <hadoop.version>2.0.0-cdh4.7.0</hadoop.version>
     <pig.version>0.12.0</pig.version>
     <jackson.version>1.8.8</jackson.version>
     <antlr.version>3.5</antlr.version>
     <log4j.version>1.2.16</log4j.version>
     <slf4j-api.version>1.4.3.jar</slf4j-api.version>
     <slf4j-log4j.version>1.4.3</slf4j-log4j.version>
-    <protobuf-java.version>2.4.0</protobuf-java.version>
+    <protobuf-java.version>2.4.0a</protobuf-java.version>
     <commons-configuration.version>1.6</commons-configuration.version>
     <commons-io.version>2.1</commons-io.version>
     <commons-lang.version>2.5</commons-lang.version>

Change the compiler target version only if you are building for Java 6 (CDH4 is built for JRE 6):

           <artifactId>maven-compiler-plugin</artifactId>
           <version>3.0</version>
           <configuration>
-            <source>1.7</source>
-            <target>1.7</target>
+            <source>1.6</source>
+            <target>1.6</target>
           </configuration>

Phoenix building

Once you have made these changes you are set to build Phoenix. Our CDH4.7.0 cluster uses Hadoop 2, so make sure to activate the hadoop2 profile:

mvn package -DskipTests -Dhadoop.profile=2

If everything goes well, you should see the following result:

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Phoenix .................................... SUCCESS [2.729s]
[INFO] Phoenix Hadoop Compatibility ...................... SUCCESS [0.882s]
[INFO] Phoenix Core ...................................... SUCCESS [24.040s]
[INFO] Phoenix - Flume ................................... SUCCESS [1.679s]
[INFO] Phoenix - Pig ..................................... SUCCESS [1.741s]
[INFO] Phoenix Hadoop2 Compatibility ..................... SUCCESS [0.200s]
[INFO] Phoenix Assembly .................................. SUCCESS [30.176s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:02.186s
[INFO] Finished at: Mon Dec 15 13:18:48 PET 2014
[INFO] Final Memory: 45M/1330M
[INFO] ------------------------------------------------------------------------

Phoenix Server component deployment

Since Phoenix is a JDBC layer on top of HBase, a server component has to be deployed on every HBase node. The goal is to get the Phoenix server component onto the HBase classpath.

You can achieve this either by copying the server component directly into HBase’s lib directory, or by copying it to an alternative path and then modifying the HBase classpath definition.

For the first approach, do:

cp phoenix-assembly/target/phoenix-3.2.3-SNAPSHOT-server.jar /opt/cloudera/parcels/CDH/lib/hbase/lib/

Note: In this case CDH is a symlink to the currently active CDH version.
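
Since the server jar must be present on every HBase node, a small loop can push it across the cluster. This is just a sketch; the host names are hypothetical, so substitute your own RegionServer list:

for h in hbase-node1 hbase-node2 hbase-node3; do
    scp phoenix-assembly/target/phoenix-3.2.3-SNAPSHOT-server.jar \
        "$h":/opt/cloudera/parcels/CDH/lib/hbase/lib/
done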

For the second approach, do:

cp phoenix-assembly/target/phoenix-3.2.3-SNAPSHOT-server.jar /opt/phoenix/

Then add the following line to /etc/hbase/conf/hbase-env.sh:

export HBASE_CLASSPATH_PREFIX=/opt/phoenix/phoenix-3.2.3-SNAPSHOT-server.jar

Whichever method you used, you have to restart HBase. If you are using Cloudera Manager, restart the HBase service.

To validate that Phoenix is on HBase class path, do:

sudo -u hbase hbase classpath | tr ':' '\n' | grep phoenix
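
If the deployment worked, the output should include the server jar, something like the following (the exact path depends on which approach you chose):

/opt/cloudera/parcels/CDH/lib/hbase/lib/phoenix-3.2.3-SNAPSHOT-server.jar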

Phoenix server validation

Phoenix provides a set of client tools that you can use to validate that the server component is working. However, since we are supporting CDH4.7.0, we’ll need to make a few changes to those utilities so they use the correct dependencies.

phoenix/bin/sqlline.py:

sqlline.py is a wrapper for the JDBC client; it provides a SQL console interface to HBase through Phoenix.

index f48e527..bf06148 100755
--- a/bin/sqlline.py
+++ b/bin/sqlline.py
@@ -53,7 +53,8 @@ colorSetting = "true"
 if os.name == 'nt':
     colorSetting = "false"
-java_cmd = 'java -cp "' + phoenix_utils.hbase_conf_path + os.pathsep + phoenix_utils.phoenix_client_jar + \
+extrajars="/opt/cloudera/parcels/CDH/lib/hadoop/lib/commons-collections-3.2.1.jar:/opt/cloudera/parcels/CDH/lib/hadoop/hadoop-auth-2.0.0-cdh4.7.0.jar:/opt/cloudera/parcels/CDH/lib/hadoop/hadoop-common-2.0.0-cdh4.7.0.jar:/opt/cloudera/parcels/CDH/lib/oozie/libserver/hbase-0.94.15-cdh4.7.0.jar"
+java_cmd = 'java -cp ".' + os.pathsep + extrajars + os.pathsep + phoenix_utils.hbase_conf_path + os.pathsep + phoenix_utils.phoenix_client_jar + \
     '" -Dlog4j.configuration=file:' + \
     os.path.join(phoenix_utils.current_dir, "log4j.properties") + \
     " sqlline.SqlLine -d org.apache.phoenix.jdbc.PhoenixDriver \

phoenix/bin/psql.py:

psql.py is a wrapper tool that can be used to create and populate HBase tables.

index 34a95df..b61fde4 100755
--- a/bin/psql.py
+++ b/bin/psql.py
@@ -34,7 +34,8 @@ else:
 # HBase configuration folder path (where hbase-site.xml reside) for
 # HBase/Phoenix client side property override
-java_cmd = 'java -cp "' + phoenix_utils.hbase_conf_path + os.pathsep + phoenix_utils.phoenix_client_jar + \
+extrajars="/opt/cloudera/parcels/CDH/lib/hadoop/lib/commons-collections-3.2.1.jar:/opt/cloudera/parcels/CDH/lib/hadoop/hadoop-auth-2.0.0-cdh4.7.0.jar:/opt/cloudera/parcels/CDH/lib/hadoop/hadoop-common-2.0.0-cdh4.7.0.jar:/opt/cloudera/parcels/CDH/lib/oozie/libserver/hbase-0.94.15-cdh4.7.0.jar"
+java_cmd = 'java -cp ".' + os.pathsep + extrajars + os.pathsep + phoenix_utils.hbase_conf_path + os.pathsep + phoenix_utils.phoenix_client_jar + \
     '" -Dlog4j.configuration=file:' + \
     os.path.join(phoenix_utils.current_dir, "log4j.properties") + \
     " org.apache.phoenix.util.PhoenixRuntime " + args

After making these changes you can test connectivity by issuing the following command:

./bin/sqlline.py zookeeper.local
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:zookeeper.local none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:zookeeper.local
14/12/16 19:26:10 WARN conf.Configuration: dfs.df.interval is deprecated. Instead, use fs.df.interval
14/12/16 19:26:10 WARN conf.Configuration: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
14/12/16 19:26:10 WARN conf.Configuration: fs.default.name is deprecated. Instead, use fs.defaultFS
14/12/16 19:26:10 WARN conf.Configuration: topology.script.number.args is deprecated. Instead, use net.topology.script.number.args
14/12/16 19:26:10 WARN conf.Configuration: dfs.umaskmode is deprecated. Instead, use fs.permissions.umask-mode
14/12/16 19:26:10 WARN conf.Configuration: topology.node.switch.mapping.impl is deprecated. Instead, use net.topology.node.switch.mapping.impl
14/12/16 19:26:11 WARN conf.Configuration: fs.default.name is deprecated. Instead, use fs.defaultFS
14/12/16 19:26:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/12/16 19:26:12 WARN conf.Configuration: fs.default.name is deprecated. Instead, use fs.defaultFS
14/12/16 19:26:12 WARN conf.Configuration: fs.default.name is deprecated. Instead, use fs.defaultFS
Connected to: Phoenix (version 3.2)
Driver: PhoenixEmbeddedDriver (version 3.2)
Autocommit status: true
Transaction isolation: TRANSACTION_READ_COMMITTED
Building list of tables and columns for tab-completion (set fastconnect to true to skip)...
77/77 (100%) Done
Done
sqlline version 1.1.2
0: jdbc:phoenix:zookeeper.local>

Then you can issue either SQL commands or Phoenix shell commands.

0: jdbc:phoenix:zookeeper.local> !tables
+------------------------------------------+------------------------------------------+------------------------------------------+---------------------------+
|                TABLE_CAT                 |               TABLE_SCHEM                |                TABLE_NAME                |                TABLE_TYPE |
+------------------------------------------+------------------------------------------+------------------------------------------+---------------------------+
| null                                     | SYSTEM                                   | CATALOG                                  | SYSTEM TABLE              |
| null                                     | SYSTEM                                   | SEQUENCE                                 | SYSTEM TABLE              |
| null                                     | SYSTEM                                   | STATS                                    | SYSTEM TABLE              |
| null                                     | null                                     | STOCK_SYMBOL                             | TABLE                     |
| null                                     | null                                     | WEB_STAT                                 | TABLE                     |
+------------------------------------------+------------------------------------------+------------------------------------------+---------------------------+
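
psql.py can be exercised the same way. For example, you can load one of the sample scripts that ship with the Phoenix source tree (the path is relative to the checkout; adjust as needed):

./bin/psql.py zookeeper.local examples/STOCK_SYMBOL.sql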

Getting Movistar Peru ZTE MF193 to work in Debian GNU/Linux (14 Feb 2014)

After so many attempts to get my shiny Movistar Peru (Internet Móvil) 3G ZTE MF193 modem to work out of the box in Debian jessie (unstable) with NetworkManager, the word frustration was hitting me on the head. Even the tricks suggested around the web led me to craziness. I gave up on fanciness and decided to take the old-school route. Release wvdial and friends!

Trying different combinations for wvdial.conf was no heaven for sure, but I found this wonderful guide from Vienna, Austria, that really made a difference. Of course, it talks about the MF180 model, but you get the idea. So I’m sharing what was different for the MF193.

Basically, I had already done the eject-and-disable-CD-ROM thing, but still no progress. I had also tried using wvdial to send AT commands to the evasive /dev/ttyUSBX device. Starting from scratch confirmed that I had indeed done those things properly. I was amused by the fact that I could use screen to talk to the modem! (Oh, all the time wasted trying to get minicom and friends to play nice.)
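
For reference, this is roughly how the screen session looks. I'm assuming the control port shows up as /dev/ttyUSB4, as it did in my case; AT+ZCDRUN=8 is the ZTE command commonly used to permanently disable the virtual CD-ROM. Quit screen with Ctrl-a k.

screen /dev/ttyUSB4 9600
AT+ZCDRUN=8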

So, let’s get to the point. After following this procedure, you should be able to use NetworkManager to connect to the Interwebs using the 3G data service from Movistar Peru.

  1. Step 1 – follow the guide
  2. Step 2 – Here I had to use /dev/ttyUSB4
  3. Step 3 – follow the guide
  4. Unplug your USB modem
  5. Plug in your USB modem. This time you should see only /dev/ttyUSB{0,1,2}, and /dev/gsmmodem should be missing (not sure if this is a bug). Now /dev/ttyUSB2 is your guy.
  6. Step 4 – use /dev/ttyUSB2
  7. Run wvdial from the CLI – it should connect successfully.
  8. Stop wvdial.
  9. Select the Network icon in GNOME3 and click the Mobile Broadband configuration you have; if there is none, create one.
  10. Voilà. Happy surfing!

I’m pasting my wvdial.conf, just in case.

[Dialer Defaults]
Modem = /dev/ttyUSB2
Username = movistar@datos
Password = movistar
APN = movistar.pe
Phone = *99#
Stupid Mode = 1
Init2 = AT+CGDCONT=4,"IP","movistar.pe"

What have we done? (30 Jan 2014)

A couple of weeks ago I was in the situation of having to set up a new laptop. I decided to go with wheezy’s DVD installer. So far so good. I didn’t expect (somewhere in the lies-I-tell-myself dept.) to have GNOME as the default Debian desktop. After the install, though, I figured out it was the new GNOME3 everybody was talking about. Before that I had only seen it on Ubuntu systems used by my classmates, and I thought: yeah, OK, GNOME3, I guess that’s fine for them as a Linux desktop. It turned out that once I started using the Debian desktop, aka GNOME3, I noticed it was not as bloated as the Ubuntu desktop I had seen before, so I stuck with it (for a while, I thought).

It turns out that I did like this new so-called GNOME3: not a window-based but an application-based system (that is something that sticks in my head). I liked the way it makes sense as a desktop system: looking for applications or documents, connecting to networks, using pluggable devices, or just configuring stuff, every time with less and less effort. Good practices and concepts learned from Mac OS X-like environments, and surely taking advantage of the new features the Linux kernel and user-space environment have gained over the years. So, a month later, I'm still with it and it makes sense for me to keep it. I had no chance to try the latest Xfce or KDE, my default choices before this experience. Kudos, GNOME team, even after the criticism you got for GNOME Shell, as I learned.

This whole situation got me pondering about the past of the Linux user experience and how we in the community led people into joining. I remember that when a guy asked “how do I configure this new monitor/VGA card/network card/etc.?”, the answer was along the lines of: “what is the exact chipset model and specific product code number of your device?” Putting myself in the shoes of such people, then or now, I’d say: what are you talking about? What is a chipset? It was so technical that only someone with more than average knowledge could grasp it. From a product perspective, this is similar to a car manufacturer telling a customer to look up the exact layout or design of their car’s engine so they can tell whether it is the ’82 model A or the ’83 model C. Simplicity in naming and identification was not in the mindset of most of us.

This is funny because as technology advances it also becomes more transparent to the customer. Today’s kids can become real power users of any new technology, as if they had many pre-set chipsets in their brains. But when going into the details we had to grasp a few years ago, they have a hard time figuring out the complexity of the product behind this clean and simple interface. New times, interesting times. Always good to look back. No, I’m not getting old.


Too open stack (31 Jul 2013)

OpenStack has been in the news recently. Despite the criticism and chitchat about API compatibility, the lack of a strong vendor, etc., there is one thing that few people, if any, are noticing. I’m going to talk from a vendor perspective rather than from the community’s. The problem I see here is confusion.

To me, one of the main things feeding the confusion is a lack of understanding of what OpenStack really is. I’ve been involved in the free software / open source community for more than 12 years and I’ve seen many stories of communities struggling internally and, despite that, managing to release high-quality software. But what I see here is a community that develops an end-user product with a governance model that involves companies as stakeholders, companies that are trying to figure out how to leverage this product in a way that preserves their branding and their customers’ loyalty; that is, how to sell it in a way that other companies cannot.

Most people are familiar with the Linux kernel model and its governance. It works very well with Linus as the benevolent dictator. Moreover, most contributions to Linux’s code come from vendor-sponsored developers or employees. The difference between this model and OpenStack’s, however, is that the vendors’ interest in Linux’s development is not tied to their business model or to end-user product/service differentiation. That’s why the Linux Foundation governance model works pretty well both for the community and for vendors.

The issues OpenStack is experiencing in the market are, in my opinion, related to the fact that the stakeholders are all trying to figure out a model that allows an end-user product to be developed by a community while also preserving independence in their distributions and differentiation in their offerings, so they can compete in the market with the other stakeholders as well as with established commercial and open-source offerings. It is quite hard to become a strong vendor for an ecosystem that is actually an end-user product.


Article: Is free software a model for making money? (13 Feb 2013)

For many it is clear that free software has managed to be seen by society as an alternative for the adoption and use of technology. Hundreds of people around the world contribute to its development. However, the question of whether it is a viable model for making money seems to have no clear answers.

In the second edition of the magazine, I published an article in which, starting from the concept of “public interest”, I set out to explain whether free software represents an adequate means to achieve an end of private interest: making money.

The concept of public interest is closely tied to the concept of free software. The main free software licenses, such as the GNU GPL, seek to protect the rules that govern the model (copy, modify, redistribute, and preserve the model) and not necessarily the interests of the developer or software producer. In other words, the value of the model lies in the rules that govern it and not in the result (the software, etc.). It is this point that generates a conflict between the public interest and a particular or personal interest, such as making money from creating software.

You can read the article on page 19. Comments are welcome!


Puppet weird SSL error: SSL_read:: pkcs1 padding too short (11 Dec 2012)

While setting up a puppet agent to talk to my puppetmaster, I got this weird SSL error:

Error: SSL_read:: pkcs1 padding too short

Debugging on both the agent and the master sides didn’t offer much information.

On the master:

puppet master --no-daemonize --debug

On the agent:

puppet agent --test --debug

Although my master was running 3.0.1 and the tested agent 2.7, the problem didn’t look related to that. People at #puppet hadn’t seen this error before either.

I figured the problem came down to an issue with openssl. So I checked versions, and there it was! The agent’s openssl was version 1.0.0j-1.43.amzn1 while the master’s was openssl-0.9.8b-10.el5_2.1. I upgraded the master’s openssl to openssl.i686 0:0.9.8e-22.el5_8.4 and voilà, the problem was gone.
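
For reference, a quick way to compare both sides, assuming RPM-based hosts as in my case:

rpm -q openssl     # run on both agent and master
openssl version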

I learned that there has been a commit in OpenSSL’s VCS that is apparently related to the issue. I hope this helps if you run into the described situation.


MySQL data encoding conversion: latin1 to utf8 (08 Oct 2012)

I was about to post a different story but things turned out differently, for good. Now I’ll post tips that may save the day in the event of encoding issues when dealing with MySQL table data. Hopefully you find it useful.

Say you have a MySQL table whose collation is set to latin1 but which has utf8-encoded data stored in it. This can happen when importing a dump, or because of Murphy’s law. When querying the data from your application or from mysql’s tools, you get awful results for non-ASCII characters such as tildes and accents. Changing the table’s collation definition will only affect new columns and records, and altering the collation of the existing column directly would make things worse. So what to do?

Fortunately, MySQL offers a nice way to save the day. You can use a transitional conversion and then set the column type back with utf8 encoding, so the declaration matches the encoding of the data it actually stores.

ALTER TABLE table_name CHANGE column_name column_name BLOB;

Issuing the previous command changes the column’s type to BLOB. Why? Because by doing so the stored bytes are kept untouched. This is important: if you use the CONVERT or MODIFY commands, the data will be converted by MySQL, and we don’t want that since the data is already in the encoding we want.

Note that column_name appears twice; this is because CHANGE takes the old and the new column names, and we want the resulting column to keep the same name.

Now we have to declare the column to be in the encoding the data actually uses, so everything is under control again. This is done by issuing the following command:

ALTER TABLE table_name CHANGE column_name column_name COLUMN_TYPE CHARACTER SET utf8;

COLUMN_TYPE depends on the column’s content type. For instance, for a TEXT column it would be:

ALTER TABLE table_name CHANGE column_name column_name TEXT CHARACTER SET utf8;

This works for VARCHAR, LONGTEXT, and other data types used to store characters.
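
Putting it all together, here is a hypothetical example for a posts table with a TEXT column named body (both names are made up):

ALTER TABLE posts CHANGE body body BLOB;
ALTER TABLE posts CHANGE body body TEXT CHARACTER SET utf8;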


The importance of history (24 Jul 2012)

It’s that time of day when I’m in the mood to write. This post will certainly be a bit odd since I’m basically putting down what I think, without editing. Update: edited and corrected.

The idea of the Internet as a medium for sharing information has been one of the greatest disruptions in the amount of information publicly available to humanity. Because of this, the Internet has a great peculiarity: everything it records it archives, and it does not forget. For many, myself included, it is more efficient to locate a piece of information through a search engine than to remember its content.
The developers of web browsers had this clear when implementing user features (bookmarks) and the HTTP protocol (hyperlinks, redirections, etc.). The same goes for those who built web servers.

However, many users, and sadly some technical people, seem to follow certain norms or pseudo-habits which promote the idea that the best policy is to wipe the slate clean. When that happens on the Internet, its community of users stops benefiting from one of its main virtues: when I visit a URL I know its address will remain, or, except in extreme cases, I will be transparently redirected to the new location of the content, that is, of the information. The Internet, and the web in particular, is founded on this basic principle through hyperlinks. Search engines rely on their validity, and users rely on the information search engines provide. In other words, there is already an intrinsic chain of trust, virtuously reinforced.

In connection with some work I am doing, I recently reviewed the publications about free software events and topics in Peru. Sadly, I see that many of the sites that originally published the information have undergone changes so drastic (a total replacement of the site) or functional changes (content is now generated using a different URL scheme) that it has been hard for me to locate information that is valuable and, above all, linked from many places. The APESOL website, which has, or had, information about activities, events, initiatives, and noteworthy happenings for the community, is no longer available; the same goes for the FOPECAL website, which was an important and serious effort to connect the actors in the field and which promoted and ran the only national free software congresses and contests known to date.

In many humanities circles it is said that if one cannot access history, one cannot understand the events that gave rise to it, nor analyze the current situation in their light. In the case of free software, the events that happened and brought us to this point are important to preserve for the new generations so that, as I note at the start of this paragraph, they learn and are not “condemned to repeat their history”. Beyond that, for an Internet user looking for specific information, encountering these breakages is one more obstacle to understanding the benefits the web provides.

I should make two caveats. First, it is always possible to make functional changes and preserve history (addresses and content) when things are done with people in mind. I have seen this more than once in newspapers, international free software projects, governments, universities, and others. In those migrations I see that such organizations or people understand the Internet and value their users, who trust the published resource. Second, in Peru this phenomenon is not unique to the community; it is also seen in companies, government websites, etc. Perhaps a sociologist or a psychiatrist could explain why. I will visit the latter, just in case.


svn changelist tutorial (14 Jul 2012)

More than a few times I’ve come upon the situation where I’m working on a Subversion working copy and want to commit only the bits that are already baked and ready to go. If you are lucky there is only one file not belonging to that changeset, but more often there are many files scattered around.

This situation, I learned, can be managed with a lot more flexibility and fun. svn changelist is a nice tool for exactly these cases: it lets you create a list of “files I want to do something with”, meaning you can use the changelist in your usual svn operations (commit, diff, and so on).

Using svn changelist

Let’s suppose you start with a project myapp, the next killer app, and you create three files: A, B and C.

dragon% touch A B C
dragon% ls -lh
total 0
-rw-r--r-- 1 rudy rudy 0 Jul 13 20:16 A
-rw-r--r-- 1 rudy rudy 0 Jul 13 20:16 B
-rw-r--r-- 1 rudy rudy 0 Jul 13 20:16 C

You want to use version control for those files, of course, so:

dragon% svn add A B C
A A
A B
A C
dragon% svn ci -m "first commit of A, B and C" .
Adding A
Adding B
Adding C
Transmitting file data ...
Committed revision 1.

Now you want to edit A and add a few pieces of text. Then you will edit B next.

dragon% echo "This file looks empty, so let's put some text inside it" > A
dragon%
dragon% echo "I'm afraid this one also is empty. Just in case" >> B

Let’s say you need to work on file C, and that implies touching B too. You don’t want to commit the changes you’ve just made to A, since you haven’t made up your mind about them.

dragon% echo "C is a nice name for a source file, but this one doesn't look like one does it?" > C
dragon% echo "I'm going to add few lines to B too" >> B
dragon% svn st
M A
M B
M C

So let’s suppose you are ready to hit the enter key and commit the 2nd revision, but you are not done with A and want to commit just B and C the way they are at this point. Here svn changelist enters the arena.

Creating a changelist

svn changelist can be used to create a list of the two files and to perform several tasks with them. Let’s start by adding both B and C to our new changelist called mylist.

dragon% svn changelist mylist B C
Path 'B' is now a member of changelist 'mylist'.
Path 'C' is now a member of changelist 'mylist'.

Now you have a changelist named mylist that you can do operations with. For instance:

dragon% svn st
M A

--- Changelist 'mylist':
M B
M C
dragon% svn diff --changelist mylist
Index: B
===================================================================
--- B (revision 1)
+++ B (working copy)
@@ -0,0 +1,2 @@
+I'm afraid this one also is empty. Just in case
+I'm going to add few lines to B too
Index: C
===================================================================
--- C (revision 1)
+++ C (working copy)
@@ -0,0 +1 @@
+C is a nice name for a source file, but this one doesn't look like one does it?

Now let’s commit the list.

dragon% svn ci -m "Comitting files from the mylist changelist" --changelist mylist
Sending B
Sending C
Transmitting file data ..
Committed revision 2.
dragon% svn st
M A

Now that you’ve checked in the files, the changelist mylist vanishes from your working copy. You cannot do operations with it any longer, since it no longer exists.

Wrapping up

Changelists are a great tool to group certain files and do operations on them. You can think of them as a sort of alias for a set of files. A few things to keep in mind: changelist membership is stored per file in your working copy, so typing svn changelist mylist A adds A to mylist alongside the existing members B and C; and assigning a file that already belongs to another changelist moves it to the new one, taking it out of the old one.

On the other hand, you are free to remove any file from its changelist using svn changelist --remove (no changelist name is needed, since a path belongs to at most one changelist). For our previous example it would be: svn changelist --remove B. That command shrinks mylist to a single element, C. You can also have as many changelists as you want, with files associated to each of them; a use case could be running a diff over a specific set of files.
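
A quick sketch of what the removal looks like; the exact confirmation message may vary between Subversion versions:

dragon% svn changelist --remove B
Path 'B' is no longer a member of a changelist.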

For the toy project we set up to walk you through this feature it makes almost no difference, but when you have lots of modified files sitting in your working copy, changelists will keep your hair in its place, and maybe save the day. svn changelist is available since Subversion 1.5.


FLISOL Puno (03 May 2012)

This past Saturday, April 28, I attended FLISOL 2012 Puno, hosted at the “Casa de la Cultura” in downtown Puno. It was a great experience to deliver a talk after a while, the second speech since I enrolled in UCSP’s Computer and Engineering Faculty to get my Computer Science BS degree.

Puno is really a nice city with a great landscape for those who, like me, enjoy the sun, lakes and wonderful mountains, sitting, as its name says, on the Andean high plain, 3860 m above sea level. Pretty high and very cold at night indeed, but a lovely place. FLISOL is short for “Latin American Free Software Installation Festival”, which just turned 12 this year.

In my talk I wanted to describe the motivations behind the origins of the Free Software movement. I especially remarked on the fact that the movement does not seek technical excellence, even though that is something we strive for, but rather the preservation of a common set of guidelines protecting the assets we value: software and freedom. Later I discussed how Debian has become a key actor in the free software ecosystem: primarily as a huge software library, second as a building block for specialized distributions (such as Ubuntu), and third, but not least, as a facilitator of new developments and contributions to the ecosystem. Through these topics I explained what we do, why we do it, and why the audience might want to contribute. There was a good reception and many questions on the subjects I elaborated on.

I was impressed to see many Debian users, some of them power users, and I even tried to help one configure the obscure Xorg settings on his laptop (I didn’t succeed for lack of connectivity; I will send him testing CDs eventually). I’d like to thank the organizers, who did a great job: nearly 30 students plus local activists started working in February collecting funds and managed to gather more than 330 attendees during the almost 10 hours of the event. Also, special thanks for the coffee machine that was available!
