Installing bcl2fastq from source without bonkers symlinking or copying libraries to other directories.

 

During the final year of my PhD (when I was also working as the IBERS and IMAPS HPC SysAdmin) my life consisted of installing software into a module-type system. HPC environments have a lot of software installed, often the same package in several different versions, and you fundamentally do not use the OS package manager for user software as it'll cause you a world of pain. One annoyance I always found was that when you look for help with a piece of software that fails to compile, the answer boils down to:

yum install PACKAGE 

or

apt-get install PACKAGE

And this happened today. bcl2fastq, the Illumina demultiplexing software, is all kinds of fun and games (USING MAKEFILES TO DEMULTIPLEX RAW DATA???? WHY?????). The install instructions are your usual ./configure and make... so you run ./configure and all is well until you get the following error:

boost-1_44_0 installed successfully
-- Successfuly built boost 1.44.0 from the distribution package...
-- Check if the system is big endian
-- Searching 16 bit integer
-- Looking for sys/types.h
-- Looking for sys/types.h - found
-- Looking for stdint.h
-- Looking for stdint.h - found
-- Looking for stddef.h
-- Looking for stddef.h - found
-- Check size of unsigned short
-- Check size of unsigned short - done
-- Using unsigned short
-- Check if the system is big endian - little endian
-- Looking for floorf
-- Looking for floorf - found
-- Looking for round
-- Looking for round - found
-- Looking for roundf
-- Looking for roundf - found
-- Looking for powf
-- Looking for powf - found
-- Looking for erf
-- Looking for erf - found
-- Looking for erf
-- Looking for erf - found
-- Looking for erfc
-- Looking for erfc - found
-- Looking for erfc
-- Looking for erfc - found
CMake Error at cmake/cxxConfigure.cmake:74 (message):
  No support for gzip compression
Call Stack (most recent call first):
  c++/CMakeLists.txt:33 (include)


-- Configuring incomplete, errors occurred!
Couldn't configure the project:

/software/testing/bcl2fastq/1.8.4/build/bootstrap/bin/cmake -H"/software/testing/bcl2fastq/1.8.4/src/bcl2fastq/src" -B"/software/testing/bcl2fastq/1.8.4/build" -G"Unix Makefiles"  -DCASAVA_PREFIX:PATH=/software/testing/bcl2fastq/1.8.4/x86_64 -DCASAVA_EXEC_PREFIX:PATH= -DCMAKE_INSTALL_PREFIX:PATH=/software/testing/bcl2fastq/1.8.4/x86_64 -DCASAVA_BINDIR:PATH= -DCASAVA_LIBDIR:PATH= -DCASAVA_LIBEXECDIR:PATH= -DCASAVA_INCLUDEDIR:PATH= -DCASAVA_DATADIR:PATH= -DCASAVA_DOCDIR:PATH= -DCASAVA_MANDIR:PATH= -DCMAKE_BUILD_TYPE:STRING=RelWithDebInfo

Moving CMakeCache.txt to CMakeCache.txt.removed

My first thought was to quickly check whether we have a zlib library installed on the HPC as a module, and we don't. Fair enough. I then wondered why there wasn't a libz already installed by the OS, and there is, but it seems to differ between the software node (a node dedicated to installing software so as not to annoy folk who are on the login node) and the compute nodes. So pointing at /lib64 would probably not work (it might if bcl2fastq builds a static binary, but I've not checked).

[user@compute-node]$ locate libz
/lib64/libz.so.1
/lib64/libz.so.1.2.3
[user@software-node]$ locate libz
-bash: locate: command not found
[user@software-node]$ echo "grrr"
grrr
[user@software-node]$ ls -lath /lib64/libz*
lrwxrwxrwx 1 root root  13 Dec 13 14:09 /lib64/libz.so.1 -> libz.so.1.2.7
-rwxr-xr-x 1 root root 89K Nov  5 18:09 /lib64/libz.so.1.2.7
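(A quicker check that doesn't depend on a locate database, if you prefer, is to ask the dynamic linker cache directly:)

/sbin/ldconfig -p | grep libz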

Okay, so after a quick Google I get the same old hacky responses:

https://biogist.wordpress.com/2012/10/23/casava-1-8-2-installation/

https://www.biostars.org/p/11202/

http://seqanswers.com/forums/showthread.php?t=11106

My next thought was: how about I just install zlib from source? So:

[user@software-node]$ source git-1.8.1.2
[user@software-node]$ which git
[user@software-node]$ git clone https://github.com/madler/zlib.git

YES, EVEN GIT HAS VERSIONS!!! yum/apt isn't the answer to everything!

[user@software-node]$ cd zlib/
[user@software-node]$ ./configure
[user@software-node]$ make -j4
[user@software-node]$ ls -lath libz*s*
lrwxrwx--- 1 martin JIC_c1 14 Jan 27 16:11 libz.so.1 -> libz.so.1.2.11
lrwxrwx--- 1 martin JIC_c1 14 Jan 27 16:11 libz.so -> libz.so.1.2.11
-rwxrwx--x 1 martin JIC_c1 103K Jan 27 16:11 libz.so.1.2.11

And that's great. We now have our libraries compiled; we just need to let the build know where they are:

export LIBRARY_PATH=/software/testing/bcl2fastq/1.8.4/lib/zlib
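(The path in that export is just where the freshly built library needs to end up; something like this puts it there, assuming the ./zlib build directory from above. Equally, zlib's own ./configure --prefix would let you make install straight into that location.)

mkdir -p /software/testing/bcl2fastq/1.8.4/lib/zlib
cp -a libz.so* /software/testing/bcl2fastq/1.8.4/lib/zlib/
cp zlib.h zconf.h /software/testing/bcl2fastq/1.8.4/lib/zlib/    # headers, in case configure wants them too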

Then it's back into my bcl2fastq build directory to rerun ./configure --prefix=/where/the/bins/go, and this time it compiled.
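Depending on the setup, a couple of related variables may be needed too; these are assumptions on my part rather than something this particular build required:

# only if zlib.h isn't already on the system include path
export C_INCLUDE_PATH=/software/testing/bcl2fastq/1.8.4/lib/zlib
# only at runtime, and only if the resulting binaries are dynamically linked against the new libz
export LD_LIBRARY_PATH=/software/testing/bcl2fastq/1.8.4/lib/zlib:$LD_LIBRARY_PATH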

All done without a yummy apt... it's Friday and I need to go home.

Backing up to AWS Glacier

 

I wanted a simple solution for backing up my web server, and the low cost of AWS S3 and Glacier is quite appealing. The solution I settled on was s3cmd (http://s3tools.org/s3cmd). After setting up an API key, I ran:

s3cmd --configure

and filled the information in.
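The backup script below assumes the target bucket already exists; if it doesn't, create it first:

s3cmd mb s3://serverbackup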

I created a new MySQL user that can read all tables for the backup (the GRANT it needs is sketched after the script below), and the backup user also has read access to the /var/www directory. Then this script runs nightly:

#!/bin/sh

DATE=`date +%Y-%m-%d`

#make db backup and compress
mysqldump -uuser-BUP -pPASSWORD --all-databases --result-file=/home/user-BUP/database_BUP/all_databases_$DATE.sql
gzip /home/user-BUP/database_BUP/all_databases_$DATE.sql

#transfer to S3
s3cmd put --storage-class=STANDARD_IA /home/user-BUP/database_BUP/all_databases_$DATE.sql.gz s3://serverbackup

#remove db dump as we will have loads of them
rm /home/user-BUP/database_BUP/all_databases_$DATE.sql.gz

#compress websites
tar cfzv /home/user-BUP/database_BUP/all_websites_$DATE.tar.gz /var/www/html

#transfer websites to S3
s3cmd put --storage-class=STANDARD_IA /home/user-BUP/database_BUP/all_websites_$DATE.tar.gz s3://serverbackup

#remove website compress too as we will have loads of them and these will be large
rm /home/user-BUP/database_BUP/all_websites_$DATE.tar.gz
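For reference, the backup user used by the script was set up with something along these lines; this is a sketch rather than the exact statement, and SELECT plus LOCK TABLES is the usual minimum mysqldump needs (add SHOW VIEW if any database has views):

mysql -u root -p -e "GRANT SELECT, LOCK TABLES ON *.* TO 'user-BUP'@'localhost' IDENTIFIED BY 'PASSWORD';"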

So now that we can transfer data to S3, I added the script as a cronjob:

0 00 * * * /home/mjv08/database_BUP/mysql_dump_script.sh > /home/mjv08/database_BUP/db_backup.log
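One small optional tweak: redirect stderr into the same log so any mysqldump or s3cmd failures are captured as well:

0 00 * * * /home/mjv08/database_BUP/mysql_dump_script.sh > /home/mjv08/database_BUP/db_backup.log 2>&1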

With the data now backing up nightly to S3, I set up a lifecycle rule to automagically transition the S3 data to Glacier after 14 days.

[Screenshot: the S3 lifecycle rule transitioning objects to Glacier after 14 days]
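For completeness, the same rule can also be created from the command line; roughly like this, assuming the aws CLI is installed and configured (the rule ID is made up):

cat > lifecycle.json <<'EOF'
{ "Rules": [ { "ID": "backups-to-glacier",
               "Filter": { "Prefix": "" },
               "Status": "Enabled",
               "Transitions": [ { "Days": 14, "StorageClass": "GLACIER" } ] } ] }
EOF
aws s3api put-bucket-lifecycle-configuration --bucket serverbackup --lifecycle-configuration file://lifecycle.json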

 

Finally, after a few days (well, after 14 days), a quick check showed that the transition had worked and all looked great.

[Screenshot: the backed-up objects now showing the Glacier storage class]
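The same check from the command line, if you prefer: look for "StorageClass": "GLACIER" in the output (the object key here is just an example of the names the script produces):

aws s3api head-object --bucket serverbackup --key all_databases_2017-01-01.sql.gz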

XenServer. Things that have cropped up.

These are more notes to myself than anything else. However, someone might find them useful.

Cannot unplug/forget a storage repository (SR).

XenCenter won't allow you to detach a storage repository. You see something like the following:

[Screenshot: the XenCenter error shown when trying to detach the storage repository]

So, you break out the command line and do the following:

[root@xenserver ~]# xe sr-list name-label=NFS\ ISO\ library
uuid ( RO) : b3ef0d38-d702-791f-5ad5-62999131fc14
name-label ( RW): NFS ISO library
name-description ( RW): NFS ISO Library [192.168.10.1:/export/isoshare]
host ( RO): 
type ( RO): iso
content-type ( RO): iso

Then get the PBD list to find out which one you need to unplug:

[root@xenserver ~]# xe pbd-list sr-uuid=b3ef0d38-d702-791f-5ad5-62999131fc14
uuid ( RO) : 68dae2a6-51df-9fe2-10f6-29fab837777c
host-uuid ( RO): 779ad98e-0dd8-4c01-8a09-cd00a785d10f
sr-uuid ( RO): b3ef0d38-d702-791f-5ad5-62999131fc14
device-config (MRO): type: nfs_iso; location: 192.168.10.1:/export/isoshare
currently-attached ( RO): true

uuid ( RO) : 474a1538-088f-2166-359b-0d5e5fc53037
host-uuid ( RO): 77740ea2-957a-487a-b53a-96e5b3aa9e33
sr-uuid ( RO): b3ef0d38-d702-791f-5ad5-62999131fc14
device-config (MRO): type: nfs_iso; location: 192.168.10.1:/export/isoshare
currently-attached ( RO): false

You attempt to unplug it...

[root@xenserver ~]# xe pbd-unplug uuid=68dae2a6-51df-9fe2-10f6-29fab837777c
Error code: SR_BACKEND_FAILURE_202
Error parameters: , General backend error [opterr=Command os.stat(/var/run/sr-mount/b3ef0d38-d702-791f-5ad5-62999131fc14) failed (5): failed], 

But no, it doesn't work. This is because, somewhere, one of the hosts in your pool still has it mounted. So go around, find it, and unmount it. As you can see below, it took a bit of brute force.

[root@xenserver ~]# umount /var/run/sr-mount/b3ef0d38-d702-791f-5ad5-62999131fc14
umount.nfs: 192.168.10.1:/export/isoshare: not found / mounted or server not reachable
umount.nfs: 192.168.10.1:/export/isoshare: not found / mounted or server not reachable
[root@xenserver ~]# umount -fl /var/run/sr-mount/b3ef0d38-d702-791f-5ad5-62999131fc14
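If you'd rather not hunt for the right host by hand, something like this narrows it down first; the host names are placeholders and it assumes root SSH between pool members:

for h in xen1 xen2 xen3; do
    echo "== $h"; ssh root@$h "mount | grep b3ef0d38 || echo '  not mounted here'"
done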

Then, everything works as it should.

[root@xenserver ~]# xe pbd-unplug uuid=68dae2a6-51df-9fe2-10f6-29fab837777c force=true
[root@xenserver ~]# xe sr-forget uuid=b3ef0d38-d702-791f-5ad5-62999131fc14
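And a final sanity check that the SR really has gone; this should now return nothing:

[root@xenserver ~]# xe sr-list uuid=b3ef0d38-d702-791f-5ad5-62999131fc14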

Concrete5 CMS. Web design using a CMS. Notes to remind me.

These are more notes to myself than anything else. However, someone might find them useful.

Welsh Flag Internationalization package

I found that when creating a bilingual site (British English and Welsh), the icon shown for Welsh was actually the UK Union Flag, even though the correct Welsh flag icon was sitting in ./images/flags/wales.png. So, taking a look at the relevant MySQL table in the website database:

mysql> select * from MultilingualSections;
+-----+------------+--------+----------+
| cID | msLanguage | msIcon | msLocale |
+-----+------------+--------+----------+
| 145 | en_GB      | GB     | en_GB_GB |
| 146 | cy         | GB     | cy_GB    |
+-----+------------+--------+----------+
2 rows in set (0.00 sec)

So, in order to resolve this, I simply changed the msIcon value for cy:

mysql> update MultilingualSections SET msIcon = 'wales' where msLanguage = 'cy'; 
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0

mysql> select * from MultilingualSections;
+-----+------------+--------+----------+
| cID | msLanguage | msIcon | msLocale |
+-----+------------+--------+----------+
| 145 | en_GB      | GB     | en_GB_GB |
| 146 | cy         | wales  | cy_GB    |
+-----+------------+--------+----------+
2 rows in set (0.00 sec)

Pretty URLs

In order to get rid of the index.php in your URLs, you need to turn on 'Pretty URLs' in System and Settings > SEO and Statistics. This is especially important if you want to use the internationalization package. Once you do this, you will probably find that your site doesn't work any more. To fix it, you need to edit:

#sudo vi /etc/httpd/conf/httpd.conf

and add the following:

<Directory "/var/www/html">
Options +FollowSymLinks
AllowOverride all
Order deny,allow
Allow from all
RewriteEngine On
</Directory>

assuming that /var/www/html is the root directory of your concrete5 installation.
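Remember to check the config and restart Apache after the edit (the commands assume the RHEL-style layout implied by the /etc/httpd path):

apachectl configtest
service httpd restart    # or: systemctl restart httpd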

301 redirect

I know, this isn't anything to do with concrete5, but it had a weird consequence when it was not set. I had a vhost set up in httpd.conf so that:

ServerName example.co.uk
ServerAlias www.example.co.uk

This has always worked fine, and avoids the embarrassment of one client's website showing up in preference to another's. However, what was happening on one of my concrete5 sites was that a custom font was not being found when the site was accessed as example.co.uk. To resolve this it is best to use a 301 redirect, which is recommended practice anyway. To do this, add the following to the .htaccess file in the root directory of your website(s):

RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.co\.uk$ [NC]
RewriteRule (.*) http://www.example.co.uk/$1 [R=301,L]

I also added the following to httpd.conf:

<Directory "/services/httpd/www.example.com/html">
Options +FollowSymLinks
AllowOverride all
Order deny,allow
Allow from all
RewriteEngine On
</Directory>
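A quick way to confirm the redirect is doing the right thing (domain as in the example above):

curl -sI http://example.co.uk/ | egrep 'HTTP|Location'
# expect a 301 status line and Location: http://www.example.co.uk/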

Installing on Solaris 11

What a pain this can be! I assume that you have OpenCSW, Apache and PHP5 already installed.

#unzip it wherever your Apache server is pointing
unzip concrete5.zip

#sort permissions out;
chown -R nobody files packages config

#install the X11 library package GD needs (the exact package name and version have been lost here; 'pkg list -a' under x11/library shows the candidates)
pkg install pkg:/x11/library/PACKAGE

#you need to install the CSWgd library
/opt/csw/bin/pkgutil -i CSWgd

#install mysql and php interface for it
pkg install mysql-51
/opt/csw/bin/pkgutil -i CSWphp5-mysql

#add the following extensions to your php.ini file;
vi /etc/opt/csw/php5/php.ini
extension=gd.so
extension=mysql.so

#restart apache
/opt/csw/apache2/sbin/apachectl restart
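To confirm the extensions actually loaded, ask PHP directly; the binary path here is a guess based on the usual CSW layout:

/opt/csw/php5/bin/php -m | egrep 'gd|mysql'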

With any luck, when you go to the site in your web browser, it will pass the tests and you should be okay to proceed with the installation. So let's make the database:

#if you haven't yet, set a root password
/usr/mysql/5.1/bin/mysqladmin -u root password NEWPASSWORD

#within mysql, set up database;
mysql -u root -p
mysql> create database c5db;

#add user (the username here is a placeholder; clearly change the password to something sensible)
mysql> grant usage on *.* to 'c5user'@'localhost' identified by 'c5password';

#give the user permissions on the new database
mysql> grant all privileges on c5db.* to 'c5user'@'localhost';

There we are: you can enter this information into your installation and c5 will do the rest... hopefully.
