
update to new generator

Gibheer 2022-03-25 14:41:22 +01:00
parent 953b419485
commit d58ebeab52
79 changed files with 9824 additions and 1346 deletions


@ -13,8 +13,11 @@ FILEMODE = 444
all: clean build
dev:
go run main.go --content-dir content --template-dir templates --static-dir static --listen "127.0.0.1:8080"
build:
hugo
go run main.go --content-dir content --template-dir templates --static-dir static --output-dir $(HTTPDIR)
clean:
-rm -r public/*


@ -1,11 +0,0 @@
baseurl = ""
languageCode = "en-us"
title = "zero-knowledge"
theme = "zero"
SectionPagesMenu = "main"
Paginate = 12
disableRSS = false
[taxonomies]
author = "author"
tag = "tags"


@ -1,11 +0,0 @@
+++
date = "2015-10-11T20:00:29+02:00"
draft = true
title = "about"
+++
## about zero-knowledge
This blog is the personal blog of Gibheer and Stormwind, where we write about
any topic from IT which keeps us working at the moment.


@ -1,6 +1,7 @@
+++
title = "Gibheer"
date = "2015-11-04T12:23:00+02:00"
url = "/author/Gibheer"
+++
## about me
@ -17,8 +18,8 @@ learn from it and try it another way next time.
Most of the stuff I try in private are online either on github or my own git
server. What isn't code, I try to write down on the blog.
As for social media, I'm on [freenode](irc://irc.freenode.org/) under the nick
Gibheer.
As for social media, I'm on [libera.chat](ircs://irc.libera.chat:6697) with the nick
'Gibheer'.
## links


@ -1,6 +1,7 @@
+++
title = "Stormwind"
date = "2015-11-04T12:40:00+02:00"
url = "/author/Stormwind"
+++
introduction

content/index.md Normal file

@ -0,0 +1,135 @@
+++
title = "blog"
author = "gibheer"
url = "/"
template = "index.html"
+++
This blog is maintained by [Gibheer](/author/Gibheer) and [Stormwind](/author/Stormwind)
about various topics.
* [link summary 2016/07/08](post/127.md)
* [poudriere in jails with zfs](post/126.md)
* [gotchas with IPs and Jails](post/125.md)
* [link summary 2016/04/09](post/124.md)
* [json/curl to go](post/123.md)
* [configuring raids on freebsd](post/122.md)
* [fast application locks](post/121.md)
* [new blog engine](post/120.md)
* [ssh certificates part 2](post/119.md)
* [ssh certificates part 1](post/118.md)
* [S.M.A.R.T. values](post/117.md)
* [minimal nginx configuration](post/115.md)
* [pgstats - vmstat like stats for postgres](post/114.md)
* [setting zpool features](post/113.md)
* [using unbound and dnsmasq](post/112.md)
* [common table expressions in postgres](post/111.md)
* [range types in postgres](post/110.md)
* [learning the ansible way](post/109.md)
* [playing with go](post/108.md)
* [no cfengine anymore](post/107.md)
* [scan to samba share with HP Officejet pro 8600](post/106.md)
* [\[cfengine\] log to syslog](post/105.md)
* [overhaul of the blog](post/104.md)
* [block mails for unknown users](post/103.md)
* [choosing a firewall on freebsd](post/102.md)
* [use dovecot to store mails with lmtp](post/100.md)
* [grub can't read zpool](post/99.md)
* [sysidcfg replacement on omnios](post/98.md)
* [filter program logs in freebsd syslog](post/97.md)
* [moving a zone between zpools](post/96.md)
* [compile errors on omnios with llvm](post/95.md)
* [inner and natural joins](post/94.md)
* [release of zero 0.1.0](post/93.md)
* [building a multi instance postgres systemd service](post/92.md)
* [automatic locking of the screen](post/91.md)
* [rotate log files with logadm](post/90.md)
* [Solaris SMF on linux with systemd](post/89.md)
* [create encrypted password for postgresql](post/88.md)
* [extend PATH in Makefile](post/87.md)
* [touchpad keeps scrolling](post/86.md)
* [Schwarze Seelen brauchen bunte Socken 2012.1](post/85.md)
* [Backups with ZFS over the wire](post/84.md)
* [the Illumos eco system](post/83.md)
* [archlinux + rubygems = gem executables will not run](post/82.md)
* [Lustige Gehversuche mit... verschlüsselten Festplatten](post/81.md)
* [find cycle detected](post/80.md)
* [openindiana - getting rubinius to work](post/79.md)
* [openindiana - curl CA failure](post/78.md)
* [openindiana - set up ssh with kerberos authentication](post/77.md)
* [great resource to ipfilter](post/76.md)
* [openindiana - ntpd does not start](post/75.md)
* [openindiana - how to configure a zone](post/74.md)
* [openindiana - how to get routing working](post/73.md)
* [How to use sysidcfg for zone deployment](post/72.md)
* [set environment variables in smf manifests](post/71.md)
* [get pfexec back in Solaris](post/70.md)
* [Solaris - a new way to 'ifconfig'](post/69.md)
* [OpenIndiana 151a released](post/68.md)
* [PostgreSQL 9.1 was released](post/67.md)
* [SmartOS - hype and a demo iso](post/66.md)
* [SmartOS - a new Solaris](post/65.md)
* [neues Lebenszeichen - neuer Blog](post/64.md)
* [Accesslogs in die Datenbank](post/63.md)
* [Schwarze Seelen brauchen bunte Socken - Teil 3](post/62.md)
* [Technik hinter dem neuen Blog](post/61.md)
* [jede Menge Umzuege](post/60.md)
* [DTrace fuer den Linuxlator in FreeBSD](post/59.md)
* [daily zfs snapshots](post/58.md)
* [Dokumentation in Textile schreiben](post/57.md)
* [Shells in anderen Sprachen](post/56.md)
* [ZFS Versionen](post/55.md)
* [Spielwahn mit Wasser](post/54.md)
* [FreeBSD Status Report Juli - September 2010](post/53.md)
* [Spass mit test-driven development](post/52.md)
* [dtrace userland in FreeBSD head](post/51.md)
* [Alle Tabellen einer DB loeschen mit PostgreSQL 9.0](post/50.md)
* [Shellbefehle im Vim ausfuehren](post/49.md)
* [zero-knowledge mit IPv6 Teil 2](post/48.md)
* [\[Rubyconf 2009\] Worst Ideas Ever](post/47.md)
* [Nachfolger von Tex](post/46.md)
* [Linux und Windows im Auto](post/45.md)
* [zero-knowledge jetzt auch per IPv6](post/44.md)
* [Der Drackenzackenschal](post/43.md)
* [Kalender auf der Konsole](post/42.md)
* [NetBeans 6.9 released](post/41.md)
* [Das Wollefest in Nierstein](post/40.md)
* [PostgreSQL - mehrere Werte aus einer Funktion](post/39.md)
* [Schwarze Seelen brauchen bunte Socken - Teil 2](post/38.md)
* [Serverumzug vollendet](post/37.md)
* [MySQL kann Datensaetze \"zerreissen\"](post/36.md)
* [Umzug mit OpenSolaris 20x0.xx](post/35.md)
* [Blub gibt es ab sofort auch fuer unterwegs](post/34.md)
* [OpenSolaris Zones mit statischer IP](post/33.md)
* [Blog nicht da](post/32.md)
* [gefaehrliches Spiel fuer das n900](post/31.md)
* [neuer CLI-Client fuer XMMS2](post/30.md)
* [Claws Mail laeuft auf OpenSolaris](post/29.md)
* [publisher contains only packages from other publisher](post/28.md)
* [PostgreSQL 8.4 in OpenSolaris](post/27.md)
* [mit PHP Mailadressen validieren](post/26.md)
* [Lustige Gehversuche mit ...](post/25.md)
* [Performance, Programme und viel Musik](post/24.md)
* [von Linux zu OpenSolaris](post/23.md)
* [Gibheers zsh-config](post/22.md)
* [Crossbow mit Solaris Containern](post/21.md)
* [Lustige Gehversuche mit Gentoo/FreeBSD](post/20.md)
* [Heidelbeertigerarmstulpen](post/19.md)
* [OpenVPN unter OpenSolaris](post/18.md)
* [OpenSolaris Wiki](post/17.md)
* [OpenSolaris ohne Reboot updaten](post/16.md)
* [einzelne Pakete unter OpenSolaris updaten](post/15.md)
* [Rails mit Problemen unter OpenSolaris](post/14.md)
* [Wie wenig braucht OpenSolaris?](post/13.md)
* [das eklige Gesicht XMLs](post/12.md)
* [Dokumentation fuer (Open)Solaris](post/11.md)
* [Woche der Updates](post/10.md)
* [Was ist XMMS2?](post/9.md)
* [Rack und XMMS2](post/8.md)
* [Webserver unter Ruby](post/7.md)
* [Symbole in Ruby](post/6.md)
* [Schwarze Seelen brauchen bunte Socken](post/5.md)
* [Zero-knowledge spielt wieder Icewars](post/4.md)
* [Serendipity als Blog?](post/3.md)
* [Indizes statt Tabellen](post/2.md)
* [zero-knowledge ohne Forum](post/1.md)


@ -36,8 +36,8 @@ well, now I really need lots and lots of wool.
Thanks again to Nathalie and her mother, who both ran the
workshop. It was great fun and I think I will keep on spinning
a lot in the future. :)\
!(float\_right)/images/wolle4.jpg(4 skeins of colorful wool from the
Wolldrache)!\
![4 skeins of colorful wool from the Wolldrache](/static/pics/wolle4.jpg)
Furthermore I have to mention that the
[Wolldrache](http://drachenwolle.de/) was here with her stand as
well, mischievously placed right at the start of the fairground.


@ -5,10 +5,10 @@ author = "Gibheer"
draft = false
+++
After a long silence here, there is an update again. In the meantime we have moved the blog to our own software, because Jekyll did not suit us. For me it was easy to write posts from the console, but there was no way to "quickly" write something while on the road.
Now we have our own blog software (which is also on github). Let's see how well we get along with it. In contrast to Jekyll we do not generate static files; instead the content is stored in the database and regenerated on every request. At the moment that is still a bit slow, but I will build something to make it better.
A comment feature is still to come, and we plan to support different types of blog posts. The former will probably be fairly easy; the latter is only a rough idea in my head at the moment.
In any case it is a nice experiment, and we will see how it develops in the future.


@ -5,16 +5,16 @@ author = "Gibheer"
draft = false
+++
Some minutes ago I saw on [hacker news](http://news.ycombinator.com/) the following line: [Joyent Open Sources SmartOS: Zones, ZFS, DTrace and KVM (smartos.org)](http://smartos.org/).
Who is behind SmartOS?
======================
What does that mean? I took a look and it seems that Joyent, the company behind [node.js](http://nodejs.org/), has released their distribution of [Illumos](https://www.illumos.org/).
After the merger of Sun and Oracle, OpenSolaris as a project was closed in favor of Solaris 11. As OpenSolaris was open source, the Illumos project emerged from its remains, but there had been no release of the Illumos kernel in any project until now.
So what is different?
=====================
The first things I saw on their page are dtrace, zfs and zones. So it's a standard Solaris. But there is more: *KVM*! If the existence of zones also means that it has crossbow and resource limits, then it would be absolutely gorgeous! It would be possible to build the core services on Solaris zones and, on top of that, multiple dev or production machines with Linux, Windows or whatever you want.
I will first test it in a virtual box to see how stable and usable it really is, as there is no documentation on the website yet. After my test I will report back.


@ -5,12 +5,12 @@ author = "Gibheer"
draft = false
+++
So, there is this new distribution of Illumos, [SmartOS](http://smartos.org), but it's not as ready as they claimed. Sure, there is an ISO, but that ISO has no installer and no package manager. So one of the crucial parts for using SmartOS is missing.
As Joyent wrote on the [blog](http://blog.smartos.org), they are working on a wiki and the documentation, and this night they showed the [wiki](http://wiki.smartos.org). Until now there is only documentation on how to use the usb image which got released at the same time. But I think that there will be much more coming.
At the same time I found out that kvm was released into the Illumos core too, so kvm will be available in every other distribution as well. And [OpenIndiana](http://openindiana.org) said they want it in their 151 release too. 151 was planned to be released some months ago, so let's see how fast they can get that out to the users.
Joyent too should release a real distribution as fast as they can, because they created a large hype around SmartOS, but have nothing to use in production. The ports are missing and an upgrade path is missing too. They wrote that they are already using it in production, so why did they not release that?
Illumos, OpenIndiana and Joyent with SmartOS are missing a big chance here to make this fork of OpenSolaris popular. They created much traction, but without having something which could be used in production. We will see how fast they can react. Hopefully the release of either OpenIndiana or SmartOS will be usable and stable in production. Then they have a chance of getting me as a user.


@ -5,6 +5,6 @@ author = "Gibheer"
draft = false
+++
Yesterday PostgreSQL 9.1 was released. It has some neat features included, like writable common table expressions, synchronized replication and unlogged tables. Apart from that, some performance tuning was included as well.
If you are interested, take a look yourself at the [release notes](http://www.postgresql.org/about/news.1349).


@ -5,10 +5,10 @@ author = "Gibheer"
draft = false
+++
After the release of [PostgreSQL 9.1](http://www.postgresql.org/about/news.1349), today another great open source project released a new version - [OpenIndiana](http://wiki.openindiana.org/oi/oi_151a+Release+Notes).
OpenIndiana is based on a fork of OpenSolaris named [Illumos](http://illumos.org). It was announced in August 2010. OpenIndiana has evolved since then and got a stable release 148 and, today, 151a. That release is very solid and has one thing which Solaris 11 does not have and most likely never will: *KVM*.
So from today you get a Solaris fork with crossbow, resource containers, zones and the kernel virtual machine, ported from Linux to Illumos by the developers of [Joyent](http://joyent.com). They built their own distribution, [SmartOS](http://smartos.org), which is a bootable OS for managing a cloud-like setup, but without the zones.
So if you have a large infrastructure and want to separate some programs from each other, or have some old infrastructure, try OpenIndiana and its zones and kvm.


@ -5,8 +5,8 @@ author = "Gibheer"
draft = true
+++
Some small notes on ipadm:
http://192.9.164.72/bin/view/Project+brussels/ifconfig_ipadm_feature_mapping
http://arc.opensolaris.org/caselog/PSARC/2010/080/materials/ipadm.1m.txt
http://blog.allanglesit.com/2011/03/solaris-11-network-configuration-basics/


@ -5,13 +5,13 @@ author = "Gibheer"
draft = false
+++
If you tried Solaris 11 or OpenIndiana in a fresh installation, you may have noticed that pfexec may not work the way you are used to. I asked in #openindiana on `irc.freenode.org` and was told that the behavior was changed. OpenSolaris used to have a `Primary Administrator` profile which got assigned to the first account created during installation. The problem with that is the same as on Windows - you are doing everything with the administrator or root account. To avoid that, sudo was introduced, which with the default settings needs the password of your account. But both tools are very different in what they do and what they are good at. So it's up to the administrator to define secure roles where appropriate and use sudo rules for the parts which have to be more secured.
If you want the old behavior back, these two steps should be enough. But keep in mind that it is important to secure your system to avoid misuse.
* there should be a line like the following in `/etc/security/prof_attr`
`Primary Administrator:::Can perform all administrative tasks:auths=solaris.*,solaris.grant;help=RtPriAdmin.html`
* if there is, then you can add that profile to your user with
`usermod -P 'Primary Administrator' <username>`
It is possible to combine these two mechanics too. You could build a zone to ssh into the box with a key and from there, ssh with sudo and a password into the internal systems.
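The check in the first step can also be scripted; here is a minimal POSIX shell sketch (the `has_profile` helper and the sample file are illustrations, not part of the original post):

```shell
#!/bin/sh
# Check whether a profile is defined before assigning it with usermod.
# prof_attr is colon-separated; the profile name is the first field.
has_profile() {
  profile=$1
  file=$2
  grep -q "^${profile}:" "$file"
}

# sample line as it appears in /etc/security/prof_attr
printf '%s\n' 'Primary Administrator:::Can perform all administrative tasks:auths=solaris.*,solaris.grant;help=RtPriAdmin.html' > prof_attr.sample

if has_profile 'Primary Administrator' prof_attr.sample; then
  # on a real system this would be: usermod -P 'Primary Administrator' <username>
  echo 'profile found'
fi
```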


@ -5,16 +5,16 @@ author = "Gibheer"
draft = false
+++
If you need to set an environment variable for an smf service, you are looking for envvar. It gets set in the `service` scope or in the `exec_method` scope. Here is a small example of how it's used.
```
<exec_method type="method" name="start" exec="/bin/bash">
  <method_context>
    <method_environment>
      <envvar name="FOO" value="bar" />
    </method_environment>
  </method_context>
</exec_method>
```
This example sets the environment variable `FOO` to bar. This is especially useful when you have to modify `PATH` or `LD_LIBRARY_PATH`. Just don't forget that you did it.
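The `PATH` case looks the same; here is a sketch where the start method and the paths are assumed values, not taken from a real manifest:

```
<exec_method type="method" name="start" exec="/opt/app/bin/start">
  <method_context>
    <method_environment>
      <!-- SMF methods start with a minimal environment, so prepend the
           application's bin directory explicitly -->
      <envvar name="PATH" value="/opt/app/bin:/usr/bin:/usr/sbin" />
    </method_environment>
  </method_context>
</exec_method>
```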


@ -5,28 +5,28 @@ author = "Gibheer"
draft = false
+++
This is mostly for myself, so that I can remember how to use the least documented feature of Solaris and openindiana - the `sysidcfg` files.
These files help deploying new zones faster, as you don't have to configure them by hand afterwards. But what is the syntax and how can you use them?
Here is an example file
```
name_service=NONE
# name_service=DNS {domain_name=<your_domain> name_server=<your_dns_server>}
nfs4_domain=dynamic
timezone=Europe/Stockholm
terminal=xterms
root_password=<crypted_password>
security_policy=NONE
network_interface=<interface1> {primary hostname=<hostname> default_route=<route_ip> ip_address=<if_ip> netmask=<if_netmask> protocol_ipv6=yes}
network_interface=<interface2> {hostname=<hostname> ip_address=<if_ip> netmask=<if_netmask> protocol_ipv6=yes default_route=NONE}
```
The most important thing first: you don't need system_locale after openindiana 151 anymore. If you have it in your config, even with C, delete it or else the setup will not work!
If you don't have a dns record for your zone yet, set `name_service` to NONE. If you already have a record, use the commented syntax.
The next interesting setting is root_password. Here you don't input the password in cleartext but crypted. I wrote a little script to generate this string. You can find the code [here](https://github.com/Gibheer/zero-pwcrypter).
The network_interface part is pretty easy, if you take these lines as a template. If you have only one interface, you can name the first interface PRIMARY. That way you have a bit less to write.
That's all so far. I will update this post, when I have figured out, what to fill into nfs4_domain and security_policy.
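Filled in with concrete values, a sysidcfg for a single-interface zone can be emitted by a small helper; a sketch in POSIX shell (the `write_sysidcfg` function, the hostname and the IP are example values, and the hash is the "foobar" crypt hash used later in the zone post):

```shell
#!/bin/sh
# Print a minimal sysidcfg for a zone with one interface.
# root_password takes a crypt(3) hash, never a cleartext password.
write_sysidcfg() {
  hostname=$1
  ip=$2
  crypted=$3
  cat <<EOF
name_service=NONE
nfs4_domain=dynamic
timezone=Europe/Stockholm
terminal=xterms
root_password=$crypted
security_policy=NONE
network_interface=PRIMARY {primary hostname=$hostname default_route=NONE ip_address=$ip netmask=255.255.255.0 protocol_ipv6=no}
EOF
}

# redirect into the zone before its first boot,
# e.g. write_sysidcfg ... > /zones/zone1/root/etc/sysidcfg
write_sysidcfg zone1 192.168.5.3 0WMBUdFzAu6qU
```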


@ -5,62 +5,62 @@ author = "Gibheer"
draft = false
+++
This time we are going to get routing working on the global zone for our other zones. You can replace the global zone with another zone, as the setup is the same.
What's needed?
==============
First we need to install ipfilter, if it isn't already installed. To do that, just invoke
```
# pkg install ipfilter
```
This will install the packet filter and NAT engine. The latter is the part we want to use now.
We will assume that the global zone has two interfaces with the following setup
* bge0 -> 192.168.4.1/24
* bge1 -> 192.168.5.1/24
configure ipnat
===============
With `ipnat` installed, we need to write a small configuration. For this example, we set up routing for every machine in the subnet.
For that, open the file `/etc/ipf/ipnat.conf` and write the following lines:
```
map bge0 192.168.5.0/24 -> 0/32 portmap tcp/udp auto
map bge0 192.168.5.0/24 -> 0/32
```
These two lines say that all packets from the subnet to the rest shall be rewritten and forwarded.
After that, all we need to do is enable the ipfilter and the routing daemons with the following commands.
```
# svcadm enable ipfilter
# routeadm -e ipv4-forwarding
# routeadm -e ipv4-routing
# routeadm -u
```
The last command checks if all daemons are running according to the settings. To see which settings are set and what the daemons are doing, run the `routeadm` command without any arguments.
configure the zone
==================
Now we fire up the zone to test if we can get anywhere near routing. In our case the zone only has one interface, so it detects the router itself per icmp.
We can verify that very easily with
```
# netstat -rn
```
The default gateway should point to our global zone. As a last test, you can ping an ip in another subnet. If the global zone says this host is alive, the zone should too.
A good IP to test is 8.8.8.8, as it is really easy to remember.
That was all. Have fun with your new access!
links and hints
===============
You can find more documentation on ipfilter and routing in the man pages of ipnat, ipf and routeadm. Some example rule sets for ipf can be found in `/usr/share/ipfilter/examples/nat.eg`.
* [a rough setup of routing](http://blog.kevinvandervlist.nl/2011/06/openindiana-zone-with-nat/)
* [NAT on solaris](http://www.rite-group.com/rich/solaris_nat.html)
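The two ipnat rules follow a fixed pattern per subnet, so they can be generated; a small POSIX shell sketch (the `emit_ipnat` helper and its arguments are illustrative, not from the original post):

```shell
#!/bin/sh
# Print the pair of ipnat NAT rules for one internal subnet.
# $1 = outside interface (e.g. bge0), $2 = internal subnet in CIDR notation.
emit_ipnat() {
  ext_if=$1
  net=$2
  # first rule handles tcp/udp with automatic port mapping,
  # second rule catches everything else (e.g. icmp)
  printf 'map %s %s -> 0/32 portmap tcp/udp auto\n' "$ext_if" "$net"
  printf 'map %s %s -> 0/32\n' "$ext_if" "$net"
}

# on a real system the output would be appended to /etc/ipf/ipnat.conf
emit_ipnat bge0 192.168.5.0/24
```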


@ -5,93 +5,93 @@ author = "Gibheer"
draft = false
+++
In this short post, we will get a container running on a openindiana host. We will do some things in crossbow, but of the following stuff is just configuring the zone. At the end of this blog post, you will find some links to related pages.
some preparations
=================
Make sure, that you have a free vnic created with dladm to use in the zone or else, we will have no network available. Further, we need a place on the filesystem, where our zone can be created. We need 500MB to 1.5GB of free space.
writing a zone configuration
============================
In the first step, we have to write a zone configuration. You can use zonecfg directly, but it's better to write it into a textfile and let zonecfg read that file. That way, you can check the configuration into a vcs of your choice.
The config should look like this.
create -b
set zonepath=/zones/zone1
set ip-type=exclusive
set autoboot=false
add net
set physical=zone1
end
commit
With this configuration, we build a zone, which get's saved in `/zones`. `/zones` has to be a zfs partition or else the zone can not be created.
The sixth line sets the network device for the zone to the vnic `zone1`.
Now we feed the file to zonecfg and let it create *zone1*.
# zonecfg -z zone1 -f zone1.conf
installation of the zone
========================
The next step is to install the zone with the command:
# zoneadm -z zone1 install
or clone it from a template with
# zoneadm -z zone1 clone template_name
Now we have to wait a bit and can write the next configuration file.
writing a sysidcfg
==================
I wrote a rough post about the [sysidcfg](http://zero-knowledge.org/post/72) already, so take a look there, if you are interested in further details.
For this example, we use the following content.
name_service=NONE
nfs4_domain=dynamic
terminal=xterms
# the password is foobar
root_password=0WMBUdFzAu6qU
security_policy=NONE
network_interface=zone1 {
primary
hostname=zone1
default_route=NONE
ip_address=192.168.5.3
netmask=255.255.255.0
protocol_ipv6=no
}
booting the zone
================
When the installation process has ended, copy the file to `/zones/zone1/root/etc/sysidcfg`. This way, the zone can read the file on the first boot and set most of the stuff.
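As a sketch, the copy step looks like this; the zone root is replaced by a temporary directory, and the sysidcfg content is shortened, so the commands can run outside of a global zone:

```shell
#!/bin/sh
# place the prepared sysidcfg below the zone root before the first boot;
# the real path would be /zones/zone1/root, a temp dir stands in for it here
zoneroot=$(mktemp -d)
mkdir -p "$zoneroot/etc"
cat > "$zoneroot/etc/sysidcfg" <<'EOF'
name_service=NONE
nfs4_domain=dynamic
terminal=xterms
EOF
ls "$zoneroot/etc"   # the file is in place for the first boot
```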
# zoneadm -z zone1 boot
To check if everything gets configured, log into the zone and check the output.
# zlogin -e ! -C zone1
It will take some time until the zone is ready to use, but it should not ask for further details. When the prompt shows, the configuration completed.
Now you can login into the zone and make further adjustments. Some topics will get their own blog entries here, so take a look at the other entries for help too.
links
=====
Here are some links for further details to this topic:
* [crossbow example from c0t0d0s0](http://www.c0t0d0s0.org/archives/5355-Upcoming-Solaris-Features-Crossbow-Part-1-Virtualisation.html)
* [howto sysidcfg](http://zero-knowledge.org/post/72)


@ -5,8 +5,8 @@ author = "Gibheer"
draft = true
+++
Here comes a small hint for everybody else who wants to run an ntp server in a zone: It does not work!
The reason for that is that ntp needs access to the time facility of the kernel, but only the global zone is allowed to access this part of the kernel. Don't worry though, you don't need an ntp client in the zones, as they get their time information from the global zone.
That cost me about 4 hours to find out. I hope, this could save you some time.


@ -5,133 +5,133 @@ author = "Gibheer"
draft = false
+++
This time, we will build a base kerberos setup. At the end, you will be able to login into another machine using kerberos only.
You need the following things, to make kerberos work:
* a working dns server
* 2 servers
I will explain this setup on an openindiana system with 2 zones. `kerberosp1` will be my kerberos machine and `sshp1` will be my ssh server with kerberos support.
setup of kerberos
=================
The setup of kerberos was pretty easy, after reading 3 tutorials about it. The essential part here is to decide, how the realm and the admin account should be called.
To start the setup, call `kdcmgr`. At first, it asks for your realm, which you should name like your domain.
After that, you have to generate an admin principal. A principal is like an account for a user or admin, but it's also used for services. I named mine `kerberosp1/admin`. Give it a safe password and you are done.
Now you should have a populated `/etc/krb5/` directory. Open the file `kdc.conf` in that directory and search for `max_life`. It was set to 8 hours for me, which was too long. Adjust the value to 4h or 16h, like you want. I did the same with `max_renewable_life`.
Edit: You should add the following option in the realms section to your realm.
kpasswd_protocol = SET_CHANGE
Kerberos uses a separate protocol for changing the password of principals. An RPC-like protocol is used in the solaris version and microsoft has another one too. So the only option compatible with all is `SET_CHANGE`. But to make things worse, the solaris default does not even work in an internal network. So just add this entry and save yourself the stress of trying to find out why this is not working.
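In context, the option goes into the realm block of `/etc/krb5/krb5.conf`; the realm and host names below are just the examples used in this post:

```
[realms]
        PROD.LAN = {
                kdc = kerberosp1.prod.lan
                admin_server = kerberosp1.prod.lan
                kpasswd_protocol = SET_CHANGE
        }
```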
setting up some accounts
========================
To use the kerberos service, check first, if the kdc is running and start it, if it's not. For openindiana, the check is
`svcs krb5kdc`
which should return online.
After that, as root start the kerberos shell with `kadmin.local`. This is a management shell to create, delete and modify principals.
Here we are going to create some policies. With these, we can set some minimal standards, like the minimum password length.
I created three policies. An `admin`, `user` and a `service` policy. These got the following settings:
* admin
* minlength 8
* minclasses 3
* user
* minlength 8
* minclasses 2
* service
* minlength 12
* minclasses 4
This sets some password limitations for every principal group I have. `minclasses` is used for different types of characters. There are lower case, upper case, numbers, punctuation and other characters.
To create a new policy, use the command `addpol` or `add_policy` with `-minlength` and `-minclasses`. You can simply type the command to get some help or read the man page.
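To get a feeling for what `minclasses` means, here is a small shell sketch that counts the character classes of a password in the same spirit; this is an illustration, not the actual check the KDC performs:

```shell
#!/bin/sh
# count how many character classes (lower, upper, digits, punctuation)
# a password contains - the number minclasses compares against
minclasses() {
  pw=$1
  n=0
  case $pw in *[[:lower:]]*) n=$((n + 1)) ;; esac
  case $pw in *[[:upper:]]*) n=$((n + 1)) ;; esac
  case $pw in *[[:digit:]]*) n=$((n + 1)) ;; esac
  case $pw in *[[:punct:]]*) n=$((n + 1)) ;; esac
  echo "$n"
}
minclasses 'foobar'      # -> 1, fails even the user policy
minclasses 'Foobar12.'   # -> 4, passes all policies above
```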
After creating the policies, we have to create some principals. First, we should create one for ourselves. You can do this with the command `addprinc` or `add_principal`. Give it a policy with the argument `-policy` and a name. You will have to input a password for this principal according to the policies.
You can use this scheme to create user accounts too. For that, you can generate a password for them with the program `pwgen`. It's pretty helpful and can generate pretty complex passwords, so that should be best.
Now we need a principal for our ssh server. The name of this principal should be `host/name_of_service.your.domain.name`, so in my case, it is `host/sshp1.prod.lan`. But I did not want to generate any password and added the argument `-randkey` which generates a password according to the policies we set.
Now we have to export the key of the last principal into a keytab file, that can be read by the service, which wants to use it. This is done with the command `ktadd` like this
`ktadd -k /etc/krb5.keytab host/sshp1.prod.lan`
This generates our file in /etc/krb5.keytab. Copy this file into the kerberos directory (on openindiana it's `/etc/krb5/`) and delete the one on the kerberos host. This is important, as another execution of ktadd will append the next key to that file.
setting up ssh
==============
For making ssh work with kerberos, we need `/etc/krb5/krb5.conf` and `/etc/krb5/krb5.keytab`. In the step before, we already moved the `krb5.keytab`. We can copy the `krb5.conf` from the kerberos server to the ssh server.
Now you can start the ssh daemon.
try to log in
=============
For the test, we will try to connect to the ssh host from the kerberos host. So start a shell on the kerberos server and type `kinit`. This should ask for your password. If it was correct, `klist` should show you, that you have been granted a ticket.
Now try to open a ssh session to the server, with `-v` set for more informations and it should work.
problems that can occur
=======================
no default realm
----------------
There is the message
kinit(v5): Configuration file does not specify default realm when parsing name gibheer
which hints, that your `/etc/krb5/krb5.conf` is missing.
client/principal not found
--------------------------
The message
kinit(v5): Client 'foo@PROD.LAN' not found in Kerberos database while getting initial credentials
is a hint, that you forgot to add the principal or that your username could not be found. Just add the principal with `kadmin` and it should work.
ssh does not use kerberos
-------------------------
If ssh does not want to use kerberos at all, check for the GSSAPI options. These should be enabled by default, but can be disabled. If that's the case, add the following line to your `sshd_config`.
GSSAPIAuthentication yes
After a restart, ssh should use kerberos for authentication.
links
=====
* [setup of kerberos on opensolaris](http://www.linuxtopia.org/online_books/opensolaris_2008/SYSADV6/html/setup-148.html)
* [MIT kerberos page](http://web.mit.edu/kerberos/krb5-1.5/krb5-1.5.4/doc/krb5-admin/krb5_002econf.html)
* [KDC Setup on Solaris](http://wiki.creatica.org/cgi-bin/wiki.pl/Kerberos_KDC_server_on_Solaris)
* [Kerberos password](http://fnal.gov/docs/strongauth/princ_pw.html#46115)
* [Kerberos policies](http://pig.made-it.com/kerberos-policy.html)
* [Administrative Guide to Kerberos](http://techpubs.spinlocksolutions.com/dklar/kerberos.html#err_server_not_found)
one last word
=============
I have one last word for you: Kerberos does not do authorization!
That means, that kerberos can not say, if one principal is allowed to use a service or not. It just manages the authentication for you.
If you want to manage the access, there are some possibilities for that. One is to use ldap, often used in conjunction with kerberos. Or you manage the `passwd` files or any other file yourself or you use a service like [chef](http://wiki.opscode.com/display/chef/Home) or [puppet](http://puppetlabs.com/).
changelog
=========
* added some explanation to `kpasswd_protocol`


@ -5,13 +5,13 @@ author = "Gibheer"
draft = false
+++
There is a bug in openindiana that does not let you get the content of a page with curl, when it's secured with ssl. The cause of this is an option set on compile time. This option is the the path to the certificate storage.
In the case of openindiana this is set to `/etc/curl/curlCA`, but all certificates reside in `/etc/certs/CA/`. This leads to the following error message, when you try it:
curl: (77) error setting certificate verify locations
To fix this, run the following script.
mkdir /etc/curl && cat /etc/certs/CA/*.pem > /etc/curl/curlCA
This writes all certificates of the default CA in the file curl is looking for and after that, it works.
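The effect of the fix can be simulated with throw-away directories, which is handy to check the command before touching `/etc`; all paths below are temporary stand-ins:

```shell
#!/bin/sh
# simulate building the single CA bundle curl expects out of the
# individual certificates; temp dirs stand in for /etc/certs/CA and /etc/curl
certs=$(mktemp -d)
curldir=$(mktemp -d)
printf -- '-----FAKE CERT ONE-----\n' > "$certs/one.pem"
printf -- '-----FAKE CERT TWO-----\n' > "$certs/two.pem"
cat "$certs"/*.pem > "$curldir/curlCA"
wc -l < "$curldir/curlCA"   # both certificates ended up in the bundle
```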


@ -5,95 +5,95 @@ author = "Gibheer"
draft = true
+++
Hey there! This time, we will get rubinius running on openindiana. As there is no package for llvm yet, it gets compiled within the build.
I got it this far because of crsd. He told me how to get llvm running, so that we could get rubinius to compile.
After that [dbussink](https://twitter.com/#!/dbussink) got rbx to compile within two days! He found some really strange things, but in the end, rubinius can run on a solaris platform!
requirements
============
But first, you have to fulfill some requirements. You have to add the sfe publisher to get gcc4.
You can do that with the command
pkg set-publisher -O http://pkg.openindiana.org/sfe sfe
After that install the following packages
* developer/gcc-3
* system/header
* system/library/math/header-math
* gnu-tar
* gnu-make
* gnu-binutils
* gnu-coreutils
* gnu-findutils
* gnu-diffutils
* gnu-grep
* gnu-patch
* gnu-sed
* gawk
* gnu-m4
* bison
* git
Yeah, that's a lot of gnu, but we need it to get everything going. The cause of this is the old versions of solaris software, which do not support many features. The default compiler is even gcc 3.4.3!
After you have installed these packages, install the following package from sfe.
* runtime/gcc
The order is important, as gcc3 and gcc4 set symlinks in /usr/bin. If you install them in another order, the symlink is not correct and you end up having a lot of work.
some patching
=============
After that, we have to fix a small bug in gcc by editing the file `/usr/include/spawn.h`.
73,76d72
< #ifdef __cplusplus
< char *const *_RESTRICT_KYWD argv,
< char *const *_RESTRICT_KYWD envp);
< #else
79d74
< #endif
86,89d80
< #ifdef __cplusplus
< char *const *_RESTRICT_KYWD argv,
< char *const *_RESTRICT_KYWD envp);
< #else
92d82
< #endif
This fixes a bug in gcc with [the `__restrict` keyword](http://gcc.gnu.org/bugzilla/show_bug.cgi?id=49347).
fix the path
============
Now that we installed and fixed a bunch of things, we need to include the gnu path into our own. Use the following command to get this done
export PATH="/usr/gnu/bin:$PATH"
Yes, it needs to be in the first place, or else one of the old solaris binaries gets chosen and then nothing works and produces weird errors.
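A quick sanity check that the GNU directory really ended up in front; the directory is the one from this post, run the check in the shell you will build in:

```shell
#!/bin/sh
# prepend the GNU userland and verify it is the first PATH entry,
# so configure and rake pick the gnu tools instead of the old solaris ones
PATH="/usr/gnu/bin:$PATH"
export PATH
first=${PATH%%:*}
echo "$first"   # -> /usr/gnu/bin
```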
getting rbx to compile
======================
with an own build
-----------------
If you want to build rbx yourself, get the code from [https://github.com/rubinius/rubinius.git](https://github.com/rubinius/rubinius.git). After that, run `./configure` and `rake` and everything should be fine.
with rvm
---------
If you want to get it working with rvm, install rvm like normal. After that you can simply install rbx with
rvm install rbx
That's all you need.
conclusion
==========
After dbussink fixed all the errors, rbx compiles fine, when the toolchain is there. To get to this point was not easy, but we did it. So have a lot of fun with hacking on and using rubinius!


@ -5,18 +5,18 @@ author = "Gibheer"
draft = false
+++
If you encounter the following error with `make install`
find: cycle detected for /lib/secure/32/
find: cycle detected for /lib/crypto/32/
find: cycle detected for /lib/32/
find: cycle detected for /usr/lib/elfedit/32/
find: cycle detected for /usr/lib/secure/32/
find: cycle detected for /usr/lib/link_audit/32/
find: cycle detected for /usr/lib/lwp/32/
find: cycle detected for /usr/lib/locale/en_US.UTF-8/32/
find: cycle detected for /usr/lib/locale/en_US.UTF-8/LO_LTYPE/32/
find: cycle detected for /usr/lib/locale/en_US.UTF-8/LC_CTYPE/32/
find: cycle detected for /usr/lib/32/
use `ginstall` in your Makefile instead of `install`. It seems just broken on solaris.


@ -9,7 +9,7 @@ draft = true
so my last system lasted over two years. You may (or may not)
remember:\
[Lustige Gehversuche mit ...](/post/25.md)
Now the (un)fortunate circumstances of a dying monitor cable
brought me to trade my beloved Hermelin for the Grinsekatze


@ -5,21 +5,21 @@ author = "Gibheer"
draft = false
+++
Two weeks ago, I had a problem with installing rubygems on my laptop. Yesterday, another person had the same problem, so I will document what is wrong here.
The problem itself manifests in the way, that it installs gems with the error message
WARNING: You don't have /home/steven/.gem/rbx/1.8/bin in your PATH,
gem executables will not run.
If you then want to use the binary provided with the gem, it will not work, and it happens with all ruby versions, be it rubinius, jruby or 1.9. What makes it worse is the fact that, till now, it only occurs on archlinux installations. And it is not a problem of rvm!
So if you are on archlinux, look into `/etc/gemrc`. There will be a line saying
gemrc: --user-install
To solve the problem, create a file `~/.gemrc` and put the line
gemrc:
in it. By doing that, the file `/etc/gemrc` will be ignored. And if you are manipulating that file, look into [all the other options](http://docs.rubygems.org/read/chapter/11) you can set.
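The override can be prepared with two lines; a temporary `HOME` is used below so the sketch does not touch a real home directory:

```shell
#!/bin/sh
# create a user gemrc that masks the system-wide /etc/gemrc;
# HOME is pointed at a temp dir here so nothing real is overwritten
HOME=$(mktemp -d)
export HOME
printf 'gemrc:\n' > "$HOME/.gemrc"
cat "$HOME/.gemrc"   # -> gemrc:
```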


@ -5,87 +5,87 @@ author = "Gibheer"
draft = false
+++
After my openindiana server is already running for 4 months straight, I thought, I write a bit about the ecosystem of Illumos and its state.
Illumos ecosystem
=================
Illumos is the base system which every distribution builds on,
much like the FreeBSD base system. While Solaris 11 is the continuation of the original OpenSolaris,
Illumos is a fork of what was open source of OpenSolaris in 2010.
The development on Illumos is pretty active and at the moment, there is no merge with the Solaris code base planned. Oracle distributed code after the Solaris 11 release, but it was mostly code which had to be distributed anyway. So there were no updates on kernel or ZFS code.
This has a huge impact on the future development of Illumos as everything has to be developed by contributors like Joyent, Nexenta and others. But it also has implications for most of the core features of Solaris, the most important ZFS. These are already noticeable with Solaris 11 having ZFS version 31 and FreeBSD and Illumos having version 28. This means, that neither FreeBSD nor Illumos can do something with a zpool created on a Solaris 11. This already makes a switch from one system to another difficult.
But nevertheless the contributors to Illumos work to make it better. The largest part at the moment is to get Illumos compiling with GCC 4.6.1. At first glance, it seems like a minor problem, but OpenSolaris was not written to be built with GCC but with the proprietary SunStudio. As far as I could see, this has some major implications and revealed huge holes in the code, which have to get fixed.
With that the base system is also upgraded from older versions of Perl and python, which also will be a longer process.
Another huge part is the process of building packages. Solaris 10 and older used the SVR4 format. That was pretty simple and looked like rpm. OpenSolaris introduced a new format named IPS - Image Packaging System. This is also compatible with the SVR4 format. OpenSolaris had a pretty big infrastructure for building IPS packages, but it was lost when oracle acquired sun and shut it down.
The problem now is how to build new packages. Some are using SVR4 to build the IPS packages, which works well, and the repository already has a bunch of newer releases of many projects.
Another attempt was to use pkgsrc. This is a project of NetBSD and already supports Solaris. This attempt died pretty fast. They were not used like FreeBSD ports and also not for compiling the packages.
The third approach is to build a packing system on top of dpkg/apt. It is a collaboration between Nexenta, OpenIndiana and others. There is also a plan to build a new distribution out of it - named illumian.
One major difference between Solaris 11 and Illumos is that Illumos has KVM. It got ported from Linux by Joyent and works pretty well. With this step, Illumos not only has zones for virtualization but also full virtualization to get Linux running.
distribution ecosystem
======================
There are a bunch of distributions out there, trying to solve different problems.
[Solaris 11 - the first cloud os][solaris11]
----------
Not so much a distribution of Illumos, but of the old OpenSolaris. Solaris 11 is a pretty good allround distribution. It is used from small systems to huge ones, running one application or some hundred on one machine. Some use it for storage and others to virtualize the hell out of it with zones and crossbow.
[OpenIndiana - open source and enterprise][openindiana]
-----------
OpenIndiana was one of the first distributions using the Illumos core. It is available as a server distribution and a desktop one. The server one is targeted for the same usage as Solaris 11. As OpenIndiana uses Illumos it also has support for KVM and therefore can be used as a platform to host many fully virtualized instances on top of ZFS and crossbow infrastructure.
A problem at the moment is the pretty old software it offers. Most of the packages are from OpenSolaris and therefore nearly 2 years old. Most of them don't even get security patches. The reason for that is the packaging topic mentioned above. As long as they don't have a strategy, nothing will change here. The only option is to use the sfe repo at the moment.
This may change in the future, because of the joint effort with Nexenta of packaging releases.
OpenIndiana also has a desktop part which is targeted at ubuntu users wanting ZFS and time machine. As I used OpenSolaris already on a laptop I can only say "Yes, it works". But you have to decide yourself, if you can live with pretty old but stable software. And many projects are not even available in package form, so that one would have to compile it yourself.
[Nexenta - enterprise storage for everyone][nexenta]
-------
Nexenta is another distribution who switched to Illumos core pretty fast. It is intended to be used for storage systems, but can also be used for other kinds of servers. It also uses the debian package system and a gnu userland. It is available as a community edition and "enterprise" edition.
The packages are a bit more up to date than the OpenIndiana ones. With the combined effort of both projects, they may keep closer to the actual releases.
[illumian - illumos + debian package management][illumian]
--------
Illumian is a new project and collaboration work between Nexenta and OpenIndiana. It will provide packages through the debian package management dpkg/apt. The target audience seems to be the same as OpenIndiana. The plan at the moment is to release all packages in the same version as in OpenIndiana, so that the ultimate choice will just be, if you want to use dpkg or IPS.
[SmartOS - the complete modern operating system][smartos]
-------
This is not so much a distribution as a live image. Its purpose is to use all disks in the server to create a zpool and use that to provide storage for virtual machines, be it zones or KVM instances. The KVM instances are also put into zones to attach dtrace to the virtual instances to see, what's going on in that instance.
SmartOS offers also pretty nice wrappers around the VM operating to get new instances up fast.
The company behind SmartOS is Joyent, more known for building node.js. They use SmartOS as the central pillar of their own JoyentCloud, where they host node.js applications, databases and also Linux machines.
[omnios][omnios]
------
OmniOS is a very new distribution and from OmniIT. It offers not much at the moment apart from an ISO image and a small wiki.
It is intended to be used much like FreeBSD. They provide a very stripped down Illumos core with updated packages as far as possible and nothing more. Every other package one might need has to be built and distributed through a package repository. The reason behind this is, that they only want to provide the basic image, which everybody needs, but not the packages needed only by themselves. And even these packages may be one or two versions behind.
And let me tell you - the packages they already updated may be considered bleeding edge by many debian stable users.
What next?
==========
This was the excursion into the world of Illumos based distributions. I myself will switch away from OpenIndiana. It's great, that Illumos lives and breathes more than 4 months ago, but there is much work left to do. SmartOS had a huge impact for me and others and Joyent and Nexenta do great work on improving the Ecosystem.
But it will be hard to get back to the times where OpenSolaris was. Too much time went by unused. But I'm looking forward what else might come up of Illumos land.
[solaris11]: http://www.oracle.com/us/products/servers-storage/solaris/solaris11/overview/index.html "Solaris 11"
[illumos]: http://illumos.org/ "the illumos project"
[openindiana]: http://openindiana.org/ "OpenIndiana"
[smartos]: http://smartos.org/ "SmartOS - the complete modern operating system"
[illumian]: http://illumian.org/ "illumian"
[nexenta]: http://nexentastor.org/ "Nexenta - the storage platform"
Now that my OpenIndiana server has been running for 4 months straight, I thought I'd write a bit about the Illumos ecosystem and its current state.
Illumos ecosystem
=================
Illumos is the base system that every distribution builds on, much like FreeBSD's base system. While Solaris 11 is the continuation of the original OpenSolaris, Illumos is a fork of the parts of OpenSolaris that were open source in 2010.
Development on Illumos is pretty active, and at the moment there is no merge with the Solaris code base planned. Oracle did distribute some code after the Solaris 11 release, but it was mostly code that had to be distributed either way, so there were no updates to the kernel or ZFS code.
This has a huge impact on the future development of Illumos, as everything now has to be developed by contributors like Joyent, Nexenta and others. It also has implications for most of Solaris' core features, most importantly ZFS. The divergence is already noticeable: Solaris 11 ships ZFS pool version 31, while FreeBSD and Illumos are at version 28. This means that neither FreeBSD nor Illumos can do anything with a zpool created on Solaris 11, which already makes switching from one system to another difficult.
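Before attempting such a switch, it is worth comparing a pool's on-disk version with what the target host supports. A quick sketch (the pool name `tank` is an example):

```shell
# Show the on-disk version of an existing pool (example pool "tank")
zpool get version tank

# List the pool versions this host's ZFS implementation supports
zpool upgrade -v
```

A pool can only be imported by a system whose supported version is at least the pool's version, so a version-28 pool moves freely between these systems, while a version-31 pool stays on Solaris 11.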
Nevertheless, the Illumos contributors are working to make it better. The largest task at the moment is getting Illumos to compile with GCC 4.6.1. At first glance this seems like a minor problem, but OpenSolaris was never written to be built with GCC, only with the proprietary Sun Studio. As far as I could see, this has major implications and exposed sizable holes in the code that have to be fixed.
Along with that, the base system is being upgraded from older versions of Perl and Python, which will also be a longer process.
Another huge part is the process of building packages. Solaris 10 and older used the SVR4 format, which was pretty simple and resembled RPM. OpenSolaris introduced a new format named IPS (Image Packaging System), which is also compatible with the SVR4 format. OpenSolaris had a pretty big infrastructure for building IPS packages, but it was lost when Oracle acquired Sun and shut it down.
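To give an idea of what IPS packaging looks like in practice, here is a rough sketch of publishing a package into a local file-based repository (the repository path, publisher name, prototype directory and manifest are made-up examples):

```shell
# Create a local file-based IPS repository and set its publisher
pkgrepo create /var/repo
pkgrepo set -s /var/repo publisher/prefix=example.org

# Publish a package: files are taken from the prototype directory,
# metadata and actions come from the (hypothetical) manifest mypkg.p5m
pkgsend publish -s /var/repo -d ./proto mypkg.p5m
```

Clients can then point `pkg` at that repository to install from it; the hard part the distributions struggle with is not these commands but maintaining the build infrastructure behind them.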
The problem now is how to build new packages. Some are using SVR4 tooling to build the IPS packages, which works well, and the repository already carries newer releases of many projects.
Another attempt was to use pkgsrc, a NetBSD project that already supports Solaris. This attempt died pretty fast: it was not used the way FreeBSD ports are, and not for compiling the packages either.
The third approach is to build a packaging system on top of dpkg/apt. It is a collaboration between Nexenta, OpenIndiana and others, and there is also a plan to build a new distribution out of it, named illumian.
One major difference between Solaris 11 and Illumos is that Illumos has KVM. It was ported over from Linux by Joyent and works pretty well. With this step, Illumos gained not only zones for lightweight virtualization but also full virtualization capable of running Linux.
distribution ecosystem
======================
There are a bunch of distributions out there, trying to solve different problems.
[Solaris 11 - the first cloud os][solaris11]
----------
This is not so much a distribution of Illumos as of the old OpenSolaris. Solaris 11 is a pretty good all-round distribution. It is used on small systems and huge ones alike, running one application or several hundred on a single machine. Some use it for storage, others to virtualize the hell out of it with zones and Crossbow.
[OpenIndiana - open source and enterprise][openindiana]
-----------
OpenIndiana was one of the first distributions to use the Illumos core. It is available as a server distribution and a desktop one. The server edition targets the same usage as Solaris 11. As OpenIndiana uses Illumos, it also supports KVM and can therefore serve as a platform hosting many fully virtualized instances on top of a ZFS and Crossbow infrastructure.
A problem at the moment is the rather old software it offers. Most of the packages date back to OpenSolaris and are therefore nearly 2 years old; most of them don't even get security patches. The reason is the packaging topic mentioned above: as long as there is no strategy, nothing will change here. The only option at the moment is to use the SFE repository.
This may change in the future because of the joint packaging effort with Nexenta.
OpenIndiana also has a desktop edition, targeted at Ubuntu users who want ZFS and a Time Machine equivalent. Having already used OpenSolaris on a laptop, I can only say: yes, it works. But you have to decide for yourself whether you can live with rather old but stable software. Many projects are not even available in package form, so you would have to compile them yourself.
[Nexenta - enterprise storage for everyone][nexenta]
-------
Nexenta is another distribution that switched to the Illumos core pretty quickly. It is intended for storage systems but can also be used for other kinds of servers. It uses the Debian package system and a GNU userland, and is available as a community edition and an "enterprise" edition.
The packages are a bit more up to date than OpenIndiana's. With the combined effort of both projects, they may stay closer to upstream releases.
[illumian - illumos + debian package management][illumian]
--------
Illumian is a new project and a collaboration between Nexenta and OpenIndiana. It will provide packages through the Debian package management tools dpkg/apt. The target audience seems to be the same as OpenIndiana's. The current plan is to release all packages in the same versions as OpenIndiana, so the ultimate choice will simply be whether you want to use dpkg or IPS.
[SmartOS - the complete modern operating system][smartos]
-------
This is not so much a distribution as a live image. Its purpose is to use all disks in the server to create a zpool and provide storage for virtual machines from it, be they zones or KVM instances. The KVM instances are also put into zones, so that DTrace can be attached to a virtual instance to see what is going on inside it.
SmartOS also offers pretty nice wrappers around VM operations to get new instances up fast.
The company behind SmartOS is Joyent, better known for backing node.js. They use SmartOS as the central pillar of their own JoyentCloud, where they host node.js applications, databases and also Linux machines.
[omnios][omnios]
------
OmniOS is a very new distribution, made by OmniTI. At the moment it offers little apart from an ISO image and a small wiki.
It is intended to be used much like FreeBSD: they provide a very stripped-down Illumos core with packages updated as far as possible, and nothing more. Every other package one might need has to be built and distributed through one's own package repository. The reasoning is that they only want to provide the basic image everybody needs, not the packages needed only by themselves. And even these packages may be one or two versions behind.
And let me tell you: the packages they have already updated may be considered bleeding edge by many Debian stable users.
What next?
==========
This was my excursion into the world of Illumos-based distributions. I myself will switch away from OpenIndiana. It's great that Illumos is much more alive than it was 4 months ago, but there is a lot of work left to do. SmartOS had a huge impact for me and others, and Joyent and Nexenta are doing great work on improving the ecosystem.
But it will be hard to get back to where OpenSolaris once was; too much time went by unused. Still, I'm looking forward to whatever else might come out of Illumos land.
[solaris11]: http://www.oracle.com/us/products/servers-storage/solaris/solaris11/overview/index.html "Solaris 11"
[illumos]: http://illumos.org/ "the illumos project"
[openindiana]: http://openindiana.org/ "OpenIndiana"
[smartos]: http://smartos.org/ "SmartOS - the complete modern operating system"
[illumian]: http://illumian.org/ "illumian"
[nexenta]: http://nexentastor.org/ "Nexenta - the storage platform"
[omnios]: http://omnios.omniti.com "OmniOS from OmniTI"

Okay, let's say you are the proud owner of a system that uses ZFS. Now let's assume you lost a disk from your storage and want a quick backup of your data without the hassle of packing everything up, checking permissions and so on. If the target system has ZFS too, this will be fun, because I will show you how to back up a ZFS dataset and all its descendants in a few small steps.
First, you have to create a recursive snapshot for the backup. This can be done with
    zfs snapshot -r tank/testpartition@backup-today
After that the real magic happens: we send this snapshot over SSH and import it on the other side.
    zfs send -R tank/testpartition@backup-today | ssh target.machine "zfs recv -u tank/backup-machine"
Now all datasets below `tank/testpartition` will be put into `tank/backup-machine` and everything will be preserved: links will be links, permissions will stay the same. The flag `-u` prevents mounting the datasets on the target machine; otherwise they would all be mounted as they were before.
As this sends the complete dataset over the wire, it is not that practical for daily backups. For that use case, use incremental sends (with the option `-i`). On the receiving side, nothing changes.
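As a sketch, a daily incremental run could look like the following, assuming yesterday's snapshot still exists on both sides (all pool, dataset and host names are examples):

```shell
# Take today's recursive snapshot (example names)
zfs snapshot -r tank/testpartition@backup-today

# Send only the changes between yesterday's and today's snapshots;
# the receiving side stays exactly the same as for a full send
zfs send -R -i tank/testpartition@backup-yesterday \
    tank/testpartition@backup-today \
    | ssh target.machine "zfs recv -u tank/backup-machine"
```

Once the transfer has succeeded, the old snapshot can be destroyed on the source, keeping only the latest common snapshot as the base for the next increment.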
Thanks at this point to [shl](http://blogs.interdose.com/sebastian/) for showing me ZFS.

Hello everyone,
==========
I actually completely forgot to report on last year's wool festival in Nierstein as well. So, on a whim, I have now adjusted the numbering a bit.
This year I'm starting really early with wool festivals. (Not to be confused, by the way, with the dreaded wool fasting.)
The day before yesterday and yesterday, the [2nd Backnang Wool Festival](http://www.backnanger-wollfest.de/) took place in Backnang, and I was there on Saturday.
<div style="text-align:center">
<img src="/images/wollfest2012-1.jpg" alt="Wool haul, part 1" /><br /><br />
</div>
So I was able to top up my already existing wool supplies with even more wool, as you can easily see in the pictures.
From the house of Zitron there is now a great 4-ply sock yarn, consisting of two plain white and two pre-dyed black plies. This means there is now wonderfully dark sock yarn from the Wolldrachen that doesn't have to be partly solid black. Of course I had to grab two skeins of it again.
Also very nice: the two combed tops of silk, one in green and one in orange, made of 100% tussah silk. (I'm melting away, it is so wonderfully soft.) I already have an idea what it should become, so now it only needs to be spun and plied. But I'm still missing the thread I want to use for plying; I have already ordered it, it just has to arrive here. So stay tuned. And I'm curious too whether it will turn out the way I want in the end.
I also brought along two more combed tops, but you can't see the second one in the photo, because 50% of it has already fallen victim to my spinning wheel.
<div style="float:right">
<img src="/images/wollfest2012-2.jpg" alt="Wool haul, part 2" />