initial commit and import
This is the initial import of all blog posts and a pretty workable theme. More work will follow.
commit
02f704abe1
@@ -0,0 +1 @@
|
||||
public/
|
@@ -0,0 +1,9 @@
|
||||
baseurl = "http://zero-knowledge.org/"
|
||||
languageCode = "en-us"
|
||||
title = "zero-knowledge"
|
||||
theme = "zero"
|
||||
SectionPagesMenu = "main"
|
||||
|
||||
[taxonomies]
|
||||
author = "author"
|
||||
tag = "tags"
|
@@ -0,0 +1,11 @@
|
||||
+++
|
||||
date = "2015-10-11T20:00:29+02:00"
|
||||
draft = true
|
||||
title = "about"
|
||||
|
||||
+++
|
||||
|
||||
## about zero-knowledge
|
||||
|
||||
This blog is the personal blog of Gibheer and Stormwind, where we write about
whatever IT topic we happen to be working on at the moment.
|
@@ -0,0 +1,34 @@
|
||||
+++
|
||||
title = "zero-knowledge ohne Forum"
|
||||
date = "2009-05-04T18:54:00+00:00"
|
||||
author = "Gibheer"
|
||||
draft = false
|
||||
+++
|
||||
|
||||
Hello dear visitor,
|
||||
|
||||
yes, zero-knowledge no longer has a forum.
|
||||
|
||||
After a good three years, in which the forum was used sometimes more, sometimes
less actively, we decided to shut the forum down for good. For one thing, the
forum had not been visited at all during the last two or three months, and for
another, we were no longer able to update the forum software.
|
||||
|
||||
myBB itself was a really good piece of software. However, its developers rather
neglected the PostgreSQL support and blamed bugs on it instead of trying to fix
them themselves. So it was no longer worth the effort to install an update for
an inactive forum.
|
||||
|
||||
However, so that the domain does not rot away completely and Blub can keep his
home, Stormwind and I decided to set up a blog here, so that we can at least
tell you something about scripts and news every now and then.
|
||||
|
||||
I simply hope that times will change and the zero-knowledge forum can perhaps
rise again some day.
|
||||
|
||||
But until then,
|
||||
|
||||
Welcome to the blog
|
@@ -0,0 +1,72 @@
|
||||
+++
|
||||
title = "Woche der Updates"
|
||||
date = "2009-07-01T08:58:00+00:00"
|
||||
author = "Gibheer"
|
||||
draft = false
|
||||
+++
|
||||
|
||||
Somehow this week is the week of updates. There were updates for
[NetBeans 6.7](http://netbeans.org) (an IDE for various programming
languages), [VirtualBox](http://www.virtualbox.org/)
(virtual machines), [PHP 5.3.0](http://php.net) and [Firefox
3.5](http://www.mozilla-europe.org/de/firefox/).
|
||||
|
||||
Each of these updates represents a big step forward, and every program
gained a lot of new features and had many bugs fixed.
|
||||
|
||||
Update 2009-07-01, 15:00: [PostgreSQL 8.4 has just been
released](http://www.postgresql.org/about/news.1108)
|
||||
|
||||
One big new feature in **NetBeans** is the direct integration of the IDE
with Kenai. [Kenai](http://www.kenai.com) is basically a platform like
Sourceforge, developed by Sun for free software projects.
|
||||
|
||||
With this direct integration you can manage all your projects through
NetBeans.
|
||||
|
||||
What is more interesting for me, though, is the direct support for PHPUnit
and Serendipity, and the SQL code completion in PHP code.
|
||||
|
||||
For Ruby there is now shoulda support (a test framework - I have not looked
at it closely yet) and finally debugging :D.
|
||||
|
||||
Also new is Groovy and Grails, which is available as a plugin.
A list of many further changes can be found
[here](http://www.netbeans.org/community/releases/67/).
|
||||
|
||||
With **VirtualBox**, the multiprocessor support and the 3D support for
DirectX are especially great additions. Which bugs were fixed can be read
in the [changelog](http://www.virtualbox.org/wiki/Changelog).
|
||||
|
||||
With **PHP** I do not quite know what to make of the new release. After
version 6 was more or less put on ice because the developers could not get
Unicode implemented properly, version 5.3 now contains all the other changes,
for example jump labels, namespaces, “A garbage collector has been added,
and is enabled by default” (was there none before? Oo) and a few other things.
Namespaces I can understand to some degree, as they can be useful, but why
the hell were jump labels implemented? Nobody needs this ancient relic from
the old days anymore, and for heaven's sake it should never be used!
|
||||
|
||||
A lot has already been written about **Firefox**, so I will spare you a big
report here. I definitely think it is great that they already included the
media tags from HTML5 and that this works wonderfully together with OGG as
a video source. I am curious what people will come up with to make use of
this feature :D.
|
||||
|
||||
So as you can see, there are a lot of new things. But that is far from the
end. Inkscape 0.47 and PostgreSQL 8.4 are also nearing completion, so we can
look forward to a few more nice things this summer.
|
||||
|
||||
Addendum 2009-07-01, 12:15: I just found a bug report about
[PHP and jump labels](http://bugs.php.net/bug.php?id=48669).
|
||||
|
||||
Addendum 2, 2009-07-01, 15:00: [PostgreSQL 8.4 has just been
released](http://www.postgresql.org/about/news.1108) (I will write a
separate entry about that ;))
|
@@ -0,0 +1,67 @@
|
||||
+++
|
||||
title = "use dovecot to store mails with lmtp"
|
||||
date = "2013-11-06T06:37:32+00:00"
|
||||
author = "Gibheer"
|
||||
draft = false
|
||||
+++
|
||||
|
||||
After more than a year working on my mail setup, I think I have it running in
a pretty good way. As some of the stuff is not documented anywhere on the
internet, I will post parts of it here to make it accessible to others.
|
||||
|
||||
Many setups use the MTA (postfix, exim) to store mails on the filesystem. My
|
||||
setup lets [dovecot take care of that][dovecot-lmtp]. That way it is the only
|
||||
process able to change data on the filesystem.
|
||||
|
||||
To make this work, we first need an [lmtp socket][lmtp socket] opened by dovecot.
The configuration part looks like this
|
||||
|
||||
service lmtp {
|
||||
unix_listener /var/spool/postfix/private/delivery.sock {
|
||||
mode = 0600
|
||||
user = postfix
|
||||
group = postfix
|
||||
}
|
||||
}
|
||||
|
||||
LMTP is a lightweight SMTP-like protocol and most mail server components can
speak it.
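For a rough idea of the protocol, here is a minimal LMTP session sketch (the
addresses are made up). The greeting is LHLO instead of EHLO, and after the
final dot the server answers with one status reply per recipient:

    LHLO client.example.org
    MAIL FROM:<sender@example.org>
    RCPT TO:<foouser@example.org>
    DATA
    ...message...
    .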
|
||||
|
||||
Next we need to tell postfix to send mails to this socket instead of storing
them on the filesystem. This can be done with the following setting
|
||||
|
||||
mailbox_transport = lmtp:unix:/var/spool/postfix/private/delivery.sock
|
||||
|
||||
or for virtual accounts with
|
||||
|
||||
virtual_transport = lmtp:unix:/var/spool/postfix/private/delivery.sock
|
||||
|
||||
Now postfix will use the socket to deliver the mails.
|
||||
|
||||
It is also possible to use other services between these two like dspam. In my
|
||||
case postfix delivers the mails to dspam and that will deliver them to dovecot.
|
||||
|
||||
For dovecot change the path of the socket to something dspam can reach. I'm
|
||||
using `/var/run/delivery.sock`.
|
||||
|
||||
Then change the dspam.conf to use that socket as a delivery host
|
||||
|
||||
DeliveryProto LMTP
|
||||
DeliveryHost "/var/run/delivery.sock"
|
||||
|
||||
As postfix needs to speak to dspam, we set dspam to create a socket too
|
||||
|
||||
ServerMode auto
|
||||
ServerDomainSocketPath "/var/run/dspam.sock"
|
||||
|
||||
`ServerMode` should be set to either `auto` or `standard`.
|
||||
|
||||
Now the only thing left to do is to tell postfix to use that socket to deliver
|
||||
its mails. For that, set the options from before to the new socket
|
||||
|
||||
virtual_transport = lmtp:unix:/var/run/dspam.sock
|
||||
|
||||
And with that, we have a nice setup where only dovecot stores mails.
|
||||
|
||||
[lmtp socket]: http://wiki2.dovecot.org/LMTP
|
||||
[dovecot-lmtp]: http://wiki2.dovecot.org/HowTo/PostfixDovecotLMTP
|
@@ -0,0 +1,18 @@
|
||||
+++
|
||||
title = "choosing a firewall on freebsd"
|
||||
date = "2014-01-06T16:15:58+00:00"
|
||||
author = "Gibheer"
|
||||
draft = false
|
||||
+++
|
||||
|
||||
As I was setting up a firewall on my freebsd server, I had to choose one of the three firewalls available.
|
||||
|
||||
There is ipfw, the firewall developed by freebsd, the older filter ipf, and pf, developed by openbsd. Feature-wise they all have their advantages and disadvantages. It is best to read the freebsd [firewall documentation](https://www.freebsd.org/doc/handbook/firewalls-apps.html).
|
||||
|
||||
In the end my decision was to use pf for one reason - it can check the syntax of the configuration before loading any of it. This was very important for me, as I'm not able to get direct access to the server easily.
|
||||
|
||||
ipf and ipfw both get initialized by a series of shell commands, which means the firewall control program gets called once per rule. If one command fails, the script may abort and the firewall ends up in a state not defined by the script. You may not even be able to get into the server by ssh anymore, and then it needs a reboot.
|
||||
|
||||
This is less of a problem with pf, as it does a syntax check on the whole configuration beforehand. It is not possible to throw pf into an undefined state because of a typo. The only way left to lock yourself out is a rule set that itself forgets to allow ssh access or anything else you need.
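The check itself is a single command; a sketch of a safe reload could look like this:

    # parse the rule set without loading it, fails on any syntax error
    pfctl -nf /etc/pf.conf
    # load the rules only after the check passed
    pfctl -f /etc/pf.conf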
|
||||
|
||||
I found the syntax of pf a bit weird, but I got a working firewall up and running which seems to work pretty well. ipfw looks similar, so maybe I will try it next time.
|
@@ -0,0 +1,34 @@
|
||||
+++
|
||||
title = "block mails for unknown users"
|
||||
date = "2014-01-16T09:01:01+00:00"
|
||||
author = "Gibheer"
|
||||
draft = false
|
||||
+++
|
||||
|
||||
Postfix's policy system is a bit confusing. There are so many knobs to avoid
receiving mails which do not belong to any account on the system, and most of
them check multiple things at once, which makes building restrictions a bit of
a gamble.
|
||||
|
||||
After I finally enabled the security reports in freebsd, the amount of mails in
the mailqueue hit me. On further investigation I even found error messages
from dspam, which had trouble rating spam for recipients that were not
even in the system.
|
||||
|
||||
To fix it, I read the postfix documentation again and built new and hopefully
better restrictions. The result was even more spam getting through.
After a day had passed and my head was relaxed, I read the documentation again
and found the following in the
[postfix manual](http://www.postfix.org/VIRTUAL_README.html#in_virtual_other)
|
||||
|
||||
> The `virtual_mailbox_maps` parameter specifies the lookup table with all valid
|
||||
> recipient addresses. The lookup result value is ignored by Postfix.
|
||||
|
||||
So instead of one of the many restrictions, a completely unrelated parameter is
responsible for blocking mails for unknown users. Another related parameter is
[`smtpd_reject_unlisted_recipient`](http://www.postfix.org/postconf.5.html#smtpd_reject_unlisted_recipient).
|
||||
This is the only other place I could find, which listed `virtual_mailbox_maps`
|
||||
and I only found it when looking for links for this blog entry.
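To illustrate, a minimal sketch of the relevant settings in main.cf (domain
and map path are made up) could look like this:

    virtual_mailbox_domains = example.org
    # only addresses listed in this map are accepted as recipients
    virtual_mailbox_maps = hash:/etc/postfix/virtual_mailboxes
    # reject recipients not listed in the maps above (default is yes)
    smtpd_reject_unlisted_recipient = yes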
|
||||
|
||||
So if you ever have problems with receiving mails for unknown users, check
`smtpd_reject_unlisted_recipient` and `virtual_mailbox_maps`.
|
@@ -0,0 +1,29 @@
|
||||
+++
|
||||
title = "overhaul of the blog"
|
||||
date = "2014-02-19T09:42:11+00:00"
|
||||
author = "Gibheer"
|
||||
draft = false
|
||||
+++
|
||||
|
||||
The new blog is finally online. It took us more than a year to finally get the new design done.
|
||||
|
||||
First we replaced thin with puma. Thin was becoming more and more of a bother and didn't really work
reliably anymore. Because of the software needed, the setup was pinned to specific versions of rack,
thin, rubinius and some other stuff. Changing one dependency meant a lot of work to get it
going again.
Puma together with rubinius makes a pretty nice stack, and it has worked well the whole time.
We will see how well it handles running longer than a few hours.
|
||||
|
||||
The next thing we did was throw out sinatra and replace it with [zero](https://github.com/libzero/zero),
our own toolkit for building small web applications.
But instead of building yet another object spawning machine, we tried something different.
The new blog uses a chain of functions to process a request into a response. This has the
advantage that the number of objects kept around for the lifetime of a request is minimized,
the stack depth is smaller and overall it should now need much less memory to process a request.
From the numbers, things are looking good, but we will see how it behaves in the future.
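As a rough sketch of the idea (not the actual zero API), such a chain can be
as simple as a list of lambdas, each passing its result on to the next one:

    steps = [
      ->(req) { req.merge(path: req[:path].downcase) },
      ->(req) { { status: 200, body: "you asked for #{req[:path]}" } }
    ]
    response = steps.reduce({ path: "/POSTS" }) { |value, step| step.call(value) }
    # => { status: 200, body: "you asked for /posts" }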
|
||||
|
||||
On the frontend we minimized the layout further, but also added some nice functionality. It is
now possible to view one post after another through the same pagination mechanism. This should
make for a nice experience when reading a number of posts one after another.
|
||||
|
||||
We hope you like the new design and will enjoy reading our stuff in the future too.
|
@@ -0,0 +1,21 @@
|
||||
+++
|
||||
title = "[cfengine] log to syslog"
|
||||
date = "2014-02-24T21:51:39+00:00"
|
||||
author = "Gibheer"
|
||||
draft = false
|
||||
+++
|
||||
|
||||
When you want to start with cfengine, it is not exactly obvious how some stuff works. To make
|
||||
it easier for others, I will write about some stuff I find out in the process.
|
||||
|
||||
For a start, here is the first thing I found out. By default cfengine logs to files
in the work directory. This can get a bit ugly when the agent is running every 5 minutes.
As I use cf-execd, I added the option
[executorfacility](https://cfengine.com/docs/3.5/reference-components-cfexecd.html#executorfacility)
to the executor control body.
|
||||
|
||||
body executor control {
|
||||
executorfacility => "LOG_LOCAL7";
|
||||
}
|
||||
|
||||
After that, a restart of cf-execd will result in logs appearing through syslog.
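To actually see the messages, the chosen facility has to be routed somewhere
by the syslog daemon, with a line in syslog.conf like this (the log file path
is just an example):

    local7.*    /var/log/cfengine.log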
|
@@ -0,0 +1,68 @@
|
||||
+++
|
||||
title = "scan to samba share with HP Officejet pro 8600"
|
||||
date = "2014-03-16T10:28:12+00:00"
|
||||
author = "Gibheer"
|
||||
draft = false
|
||||
+++
|
||||
|
||||
Yesterday I bought a printer/scanner combination, an HP Officejet pro 8600. It
has some nice functions included, but the most important one for us was the
ability to scan to a network storage.
|
||||
As I did not find any documentation on how it is possible to get the printer to
|
||||
speak with a samba share, I will describe it here.
|
||||
|
||||
To get started I assume, that you already have a configured and running samba
|
||||
server.
|
||||
|
||||
The first step is to create a new system user and group. This user will be used
to create a login on the samba server for the scanner. The group will hold all
users which should have access to the scanned documents. The following commands
are for freebsd, but there should be an equivalent for any other system (like useradd).
|
||||
|
||||
pw groupadd -n scans
|
||||
pw useradd -n scans -u 10000 -c "login for scanner" -d /nonexistent -g scans -s /usr/sbin/nologin
|
||||
|
||||
We can already add the user to the samba user management. Don't forget to set
a strong password.
|
||||
|
||||
smbpasswd -a scans
|
||||
|
||||
As we have the group for all scan users, we can add every account which should
|
||||
have access
|
||||
|
||||
pw groupmod scans -m gibheer,stormwind
|
||||
|
||||
Now we need a directory to store the scans in. We make sure that no one other
than group members can modify data in that directory.
|
||||
|
||||
zfs create rpool/export/scans
|
||||
chown scans:scans /export/scans
|
||||
chmod 770 /export/scans
|
||||
|
||||
Now that we have the system stuff done, we need to configure the share in the
|
||||
samba config. Add and modify the following part
|
||||
|
||||
[scans]
|
||||
comment = scan directory
|
||||
path = /export/scans
|
||||
writeable = yes
|
||||
create mode = 0660
|
||||
guest ok = no
|
||||
valid users = @scans
|
||||
|
||||
Now restart/reload the samba server and the share should be good to go.
|
||||
The only thing left is to configure the scanner to use that share. I did it over
|
||||
the webinterface. For that, go to `https://<yourscannerhere>/#hId-NetworkFolderAccounts`.
|
||||
Then we add a new network folder with the following data:
|
||||
|
||||
* display name: scans
|
||||
* network path:
|
||||
* user name: scans
|
||||
* password: <enter password>
|
||||
|
||||
In the next step, you can secure the network folder with a pin. In the third step
you can set the default scan settings, and then you are done.
Save and test the settings and everything should work fine. The first scan will
be named scan.pdf and all following ones will have an id appended. Too bad there
isn't a setting to append a timestamp instead. But it is still very nice to be
able to scan to a network device.
|
@@ -0,0 +1,36 @@
|
||||
+++
|
||||
title = "no cfengine anymore"
|
||||
date = "2014-03-16T10:51:52+00:00"
|
||||
author = "Gibheer"
|
||||
draft = false
|
||||
+++
|
||||
|
||||
I thought I could write more good stuff about cfengine, but it had some pretty
|
||||
serious issues for me.
|
||||
|
||||
The first issue is the documentation. There are two documents available, one for
an older version, which is very well written, and a newer one, which is a
nightmare to navigate. I would use the older version if it still applied all the time.
|
||||
|
||||
The second issue is that cfengine can destroy itself. cfengine is one of the
|
||||
oldest configuration management systems and I didn't expect that.
|
||||
|
||||
Given a configuration error, the server will still hand out the broken files to
the agents. As the agent pulls are configured in the same promise files as the
rest of the system, an error in any file will result in the agent not being able
to pull any new version.
|
||||
|
||||
Furthermore, the syntax is not easy at all and has some bogus limitations. For
example, it is not allowed to name a promise file with a dash. But instead of a
warning or an error, cfengine just can't find the file.
|
||||
|
||||
This is not at all what I expect to get.
|
||||
|
||||
What I need is a system which can't deactivate itself, or even better, one that
just runs on a central server. I also don't want to run weird scripts just to get
ruby compiled on the system to set up the configuration management. In my eyes,
that is part of the job of the tool.
|
||||
|
||||
The only one I found which can handle that seems to be ansible. It is written
in python and runs all commands remotely with the help of python, or in a raw mode.
The first tests also looked very promising. I will keep posting about how it goes.
|
@@ -0,0 +1,98 @@
|
||||
+++
|
||||
title = "playing with go"
|
||||
date = "2014-04-04T22:39:45+00:00"
|
||||
author = "Gibheer"
|
||||
draft = false
|
||||
+++
|
||||
|
||||
For some weeks now I have been playing with Go, a programming language developed
|
||||
with support from google. I'm not really sure yet, if I like it or not.
|
||||
|
||||
The ugly things first - so that the nice things can be enjoyed longer.
|
||||
|
||||
Go's package management is probably one of the worst points of the language. It
has an included system to load code from any repository system, but everything
has to be versioned. The weird thing is that they forgot to make it possible to
pin the dependencies to a specific version. Some projects are on the way to
implement this feature, but it will probably take some time.
|
||||
|
||||
What I also miss is a shell to test code and just try stuff. Go is a compiled
language, but I really like a shell for small code spikes, calculations and the
like. I really hope they will include one sometime in the future, but I doubt it.
|
||||
|
||||
With that comes also a very strict project directory structure, which makes it
|
||||
nearly impossible to just open a project and code away. One has to move into
|
||||
the project structure.
|
||||
|
||||
The naming of functions and variables is strict too. Everything is bound to the
|
||||
package namespace by default. If the variable, type or function begins with a
|
||||
capital letter, it means that the object is exported and can be used from other
|
||||
packages.
|
||||
|
||||
// a public function
|
||||
func FooBar() {
|
||||
}
|
||||
|
||||
// not a public function
|
||||
func fooBar() {
|
||||
}
|
||||
|
||||
Coming from other programming languages, it might be a bit irritating and I still
|
||||
don't really like the strictness, but my hands learned the lesson and mostly
|
||||
capitalize it for me.
|
||||
|
||||
Now the most interesting part for me is that I can use Go very easily. I have
to look up many of the functions, but the syntax is very easy to learn. Just
for fun I built a small cassandra benchmark in a couple of hours and it works
very nicely.
|
||||
|
||||
After some adjustments it even ran in parallel and is now stressing a cassandra
|
||||
cluster for more than 3 weeks. That was a very nice experience.
|
||||
|
||||
Starting a thread in Go is surprisingly easy. There is nothing much needed to
|
||||
get it started.
|
||||
|
||||
    go function(arg1, arg2)
|
||||
|
||||
It is really nice to just include a small two letter command to get the function
|
||||
to run in parallel.
|
||||
|
||||
Go also includes a feature I wished for some time in Ruby. Here is an example
|
||||
of what I mean
|
||||
|
||||
def foo(arg1)
|
||||
return unless arg1.respond_to?(:bar)
|
||||
do_stuff
|
||||
end
|
||||
|
||||
What this function does is test the argument for a specific method. Essentially
|
||||
it is an interface without a name. For some time I found that pretty nice to
|
||||
ask for methods instead of some weird name someone put behind the class name.
|
||||
|
||||
The Go designers found another way for the same problem. They called them
|
||||
also interfaces, but they work a bit differently. The same example, but this
|
||||
time in Go
|
||||
|
||||
    type Barer interface {
        Bar()
    }

    func foo(b Barer) {
        // do stuff with b; any type that has Bar() can be passed in
        b.Bar()
    }
|
||||
|
||||
In Go, we give our method constraint a name and use that in the function
|
||||
definition. But instead of adding the name to the struct or class like in Java,
|
||||
only the method has to be implemented and the compiler takes care of the rest.
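A complete, runnable version of the example above (the type name is made up)
shows the implicit interface satisfaction:

    package main

    import "fmt"

    type Barer interface {
        Bar()
    }

    // Thing never mentions Barer, it satisfies it simply by having Bar()
    type Thing struct{}

    func (t Thing) Bar() { fmt.Println("bar called") }

    func foo(b Barer) { b.Bar() }

    func main() {
        foo(Thing{}) // compiles because Thing implements Bar()
    }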
|
||||
|
||||
But the biggest improvement for me is the tooling around Go. They deliver it
|
||||
with a formatting tool, a documentation and a test tool. And everything works
|
||||
blazingly fast. Even the compiler can run in mere seconds instead of minutes.
|
||||
It is actually fun to have such a fast feedback cycle with a compiled
language.
|
||||
|
||||
So for me, Go is definitely an interesting but not perfect project. The language
definition is great and the tooling is good. But the strict and weird project
directory structure and the package management are currently a big problem for me.
|
||||
|
||||
I hope they get that figured out and then I will gladly use Go for some stuff.
|
@@ -0,0 +1,34 @@
|
||||
+++
|
||||
title = "learning the ansible way"
|
||||
date = "2014-08-08T19:13:07+00:00"
|
||||
author = "Gibheer"
|
||||
draft = false
|
||||
+++
|
||||
|
||||
Some weeks ago I read a [blog post](http://palcu.blogspot.se/2014/06/dotfiles-and-dev-tools-provisioned-by.html) about rolling out your configs with ansible as a way to learn how to use it. The post wasn't full of information on how to do it, but his repository was a great inspiration.
|
||||
|
||||
As I stopped [using cfengine](/post/107) and instead wanted to use ansible, that was a great opportunity to learn how to use it further, and I have to say, it is a really nice experience. Apart from a bunch of configs I find every now and then, I have everything in my [config repository](https://github.com/Gibheer/configs/tree/501c2887b74b7447803e1903bd7c0781d911d363/playbooks).
|
||||
|
||||
The config is split at the moment between servers and workstations, both using an inventory file with localhost. As I mostly use freebsd and archlinux, I had to set the python interpreter path to different locations. There are two ways to do that in ansible. The first is to add it to the inventory
|
||||
|
||||
[hosts]
|
||||
localhost
|
||||
|
||||
[hosts:vars]
|
||||
ansible_connection=local
|
||||
ansible_python_interpreter=/usr/local/bin/python2
|
||||
|
||||
and the other is to set it in the playbook
|
||||
|
||||
- hosts: hosts
|
||||
vars:
|
||||
ansible_python_interpreter: /usr/local/bin/python2
|
||||
roles:
|
||||
- vim
|
||||
|
||||
The latter has the small disadvantage that running plain ansible is not possible. Ansible in command and check mode also needs an inventory and uses the variables defined there. If they are not stated, ansible has no idea what to do. But at the moment it isn't much of a problem.
Maybe that problem can be solved by using a [dynamic inventory](http://docs.ansible.com/intro_dynamic_inventory.html#other-inventory-scripts).
|
||||
|
||||
What I can definitely recommend is using roles. These are descriptions of what to do and can be filled with variables from the outside. I have used them to bundle all tasks for one topic. Then I can include these for the hosts I want them on, which makes for rather nice playbooks. One good example is my [vim config](https://github.com/Gibheer/configs/tree/501c2887b74b7447803e1903bd7c0781d911d363/playbooks/roles/vim), as it shows [how to use lists](https://github.com/Gibheer/configs/blob/501c2887b74b7447803e1903bd7c0781d911d363/playbooks/roles/vim/tasks/main.yml).
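For orientation, a minimal sketch of such a role layout (file names are
examples) looks like this:

    playbooks/
      site.yml
      roles/
        vim/
          tasks/main.yml   # the task list the role runs
          files/vimrc      # static files the tasks can copy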
|
||||
|
||||
All in all I'm pretty impressed with how well it works. At the moment I'm working on a way to provision jails automatically, so that I can run the new server completely through ansible. That should make moving to a new server in the future much easier.
|
@@ -0,0 +1,15 @@
|
||||
+++
|
||||
title = "Dokumentation fuer (Open)Solaris"
|
||||
date = "2009-08-11T13:14:00+00:00"
|
||||
author = "Gibheer"
|
||||
draft = false
|
||||
+++
|
||||
|
||||
Through a banner on [pg-forum](http://pg-forum.de) I just ended up on a page
at Sun which hosts a number of documents in PDF form about dtrace, containers,
ZFS and so on.
|
||||
|
||||
There is probably something in there for everyone.
|
||||
|
||||
The link to the page is: [Sun
documentation](http://uk.sun.com/practice/software/solaris/how_to_guide.jsp?cid=20090729DE_TACO_SHTG_D_0001)
|
@@ -0,0 +1,63 @@
|
||||
+++
|
||||
title = "range types in postgres"
|
||||
date = "2014-08-08T20:23:36+00:00"
|
||||
author = "Gibheer"
|
||||
draft = false
|
||||
+++
|
||||
|
||||
Nearly two years ago, Postgres got a very nice feature - [range types][range-types]. These are available for timestamps, numerics and integers.
|
||||
The problem was that until now I didn't have a good example of what one could do with them. But today someone gave me a quest to use them!
|
||||
|
||||
His problem was that they had ID ranges used by customers and they weren't sure if any of them overlapped. The table looked something like this:
|
||||
|
||||
create table ranges(
|
||||
range_id serial primary key,
|
||||
lower_bound bigint not null,
|
||||
upper_bound bigint not null
|
||||
);
|
||||
|
||||
With data like this
|
||||
|
||||
insert into ranges(lower_bound, upper_bound) values
|
||||
(120000, 120500), (123000, 123750), (123750, 124000);
|
||||
|
||||
They had something like 40,000 rows of that kind. So this was perfect for using range type queries.
|
||||
|
||||
To find out, if there was an overlap, I used the following query
|
||||
|
||||
select *
|
||||
from ranges r1
|
||||
join ranges r2
|
||||
on int8range(r1.lower_bound, r1.upper_bound, '[]') &&
|
||||
int8range(r2.lower_bound, r2.upper_bound, '[]')
|
||||
where r1.range_id != r2.range_id;
|
||||
|
||||
In this case, int8range takes two bigint values and converts them to a range. The string `[]` defines whether the two bounds are included in or excluded from the range. In this example, both are included.
|
||||
The output for this query looked like the following
|
||||
|
||||
range_id │ lower_bound │ upper_bound │ range_id │ lower_bound │ upper_bound
|
||||
──────────┼─────────────┼─────────────┼──────────┼─────────────┼─────────────
|
||||
2 │ 123000 │ 123750 │ 3 │ 123750 │ 124000
|
||||
3 │ 123750 │ 124000 │ 2 │ 123000 │ 123750
|
||||
(2 rows)
|
||||
|
||||
Time: 0.317 ms
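The hit makes sense: with inclusive `[]` bounds the value 123750 belongs to
both ranges, which a quick check of the overlap operator confirms:

    select int8range(123000, 123750, '[]') && int8range(123750, 124000, '[]');
     ?column?
    ──────────
     t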
|
||||
|
||||
But as I said, the table had 40,000 rows. That means the join produces a set of 1.6 billion pairs to filter. The computation of the query took a very long time, so I used another nice feature of postgres - transactions.
|
||||
|
||||
The idea was to add a temporary index to get the computation done in a much faster time (the index is also described in the [documentation][index]).
|
||||
|
||||
begin;
|
||||
create index on ranges using gist(int8range(lower_bound, upper_bound, '[]'));
|
||||
select *
|
||||
from ranges r1
|
||||
join ranges r2
|
||||
on int8range(r1.lower_bound, r1.upper_bound, '[]') &&
|
||||
int8range(r2.lower_bound, r2.upper_bound, '[]')
|
||||
where r1.range_id != r2.range_id;
|
||||
rollback;
|
||||
|
||||
The overall runtime in my case was 300ms, so the write lock taken by the index creation wasn't much of a concern anymore.
|
||||
|
||||
[range-types]: http://www.postgresql.org/docs/current/static/rangetypes.html
|
||||
[index]: http://www.postgresql.org/docs/current/static/rangetypes.html#RANGETYPES-INDEXING
|
@@ -0,0 +1,71 @@
|
||||
+++
|
||||
title = "common table expressions in postgres"
|
||||
date = "2014-10-13T21:45:31+00:00"
|
||||
author = "Gibheer"
|
||||
draft = false
|
||||
+++
|
||||
|
||||
Four weeks ago I was asked to show some features of PostgreSQL. For that
presentation I came up with an interesting statement with which I could show
off a nice feature.
|
||||
|
||||
What I'm talking about is the usage of [common table expressions (or short CTE)][CTE]
|
||||
and explain.
|
||||
|
||||
Common table expressions create a temporary table just for this query. The
|
||||
result can be used anywhere in the rest of the query. It is pretty useful to
|
||||
group sub selects into smaller chunks, but also to create DML statements which
|
||||
return data.
|
||||
|
||||
A statement using CTEs can look like this:
|
||||
|
||||
with numbers as (
|
||||
select generate_series(1,10)
|
||||
)
|
||||
select * from numbers;
|
||||
|
||||
But it gets even nicer, when we can use this to move data between tables, for
|
||||
example to archive old data.
|
||||
|
||||
Lets create a table and an archive table and try it out.
|
||||
|
||||
$ create table foo(
|
||||
id serial primary key,
|
||||
t text
|
||||
);
|
||||
$ create table foo_archive(
|
||||
like foo
|
||||
);
|
||||
$ insert into foo(t)
|
||||
select generate_series(1,500);
|
||||
|
||||
The [like option][like option] can be used to copy the table structure to a new table.
|
||||
|
||||
The table `foo` is now filled with data. Next we will delete all rows where the
ID modulo 25 resolves to 0, and insert those rows into the archive table.
|
||||
|
||||
$ with deleted_rows as (
|
||||
delete from foo where id % 25 = 0 returning *
|
||||
)
|
||||
insert into foo_archive select * from deleted_rows;
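A quick count shows what was moved; with ids from 1 to 500, every 25th row
matches, so 20 rows should now be in the archive:

    $ select count(*) from foo_archive;
     count
    ───────
        20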
|
||||
|
||||
Another nice feature of postgres is the possibility to get an explain from a
|
||||
delete or insert. So when we prepend explain to the above query, we get this
|
||||
explain:
|
||||
|
||||
QUERY PLAN
|
||||
───────────────────────────────────────────────────────────────────
|
||||
Insert on foo_archive (cost=28.45..28.57 rows=6 width=36)
|
||||
CTE deleted_rows
|
||||
-> Delete on foo (cost=0.00..28.45 rows=6 width=6)
|
||||
-> Seq Scan on foo (cost=0.00..28.45 rows=6 width=6)
|
||||
Filter: ((id % 25) = 0)
|
||||
-> CTE Scan on deleted_rows (cost=0.00..0.12 rows=6 width=36)
|
||||
(6 rows)
|
||||
|
||||
This explain shows that a sequential scan is done for the delete, which is
wrapped into the CTE deleted_rows, our temporary view. That is then scanned
again and used to insert the data into foo_archive.
|
||||
|
||||
[CTE]: http://www.postgresql.org/docs/current/static/queries-with.html
|
||||
[like option]: http://www.postgresql.org/docs/current/static/sql-createtable.html
|
@@ -0,0 +1,156 @@
|
||||
+++
|
||||
title = "using unbound and dnsmasq"
|
||||
date = "2014-12-09T22:13:58+00:00"
|
||||
author = "Gibheer"
|
||||
draft = false
|
||||
+++
|
||||
|
||||
After some time of using an [Almond](http://www.securifi.com/almond) as our router
|
||||
and always having trouble with disconnects, I bought a small [apu1d4](http://www.pcengines.ch/apu1d4.htm),
|
||||
an AMD low power board, as our new router.
|
||||
It is now running FreeBSD and is very stable. Not a single connection has been
dropped yet.
|
||||
|
||||
As we have some services in our network, like a fileserver and a printer, we
always wanted to use names instead of IPs, but no router so far could
provide that. So this was the first problem I solved.
|
||||
|
||||
FreeBSD comes with unbound preinstalled. Unbound is a caching DNS resolver, which
helps answer DNS queries faster when they have been queried before. I
wanted to use unbound as the primary source for DNS queries, as the caching
functionality is pretty nice.
Furthermore, I wanted an easy DHCP server which would also function as a DNS server.
For that purpose dnsmasq fits best. There are also ways to use dhcpd, bind and
some glue to get the same result, but I wanted as few services as possible.
|
||||
|
||||
So my setup constellation looks like this:
|
||||
|
||||
client -> unbound -> dnsmasq
|
||||
+-----> ISP dns server
|
||||
|
||||
For my internal tld, I will use zero. The dns server is called cerberus.zero and
|
||||
has the IP 192.168.42.2. The network for this setup is 192.168.42.0/24.
|
||||
|
||||
## configuring unbound
|
||||
|
||||
For this to work, first we configure unbound to make name resolution work at
|
||||
all. Most files already have pretty good defaults, so we will overwrite these
|
||||
with a file in `/etc/unbound/conf.d/`, in my case `/etc/unbound/conf.d/zero.conf`.
|
||||
|
||||
server:
|
||||
interface: 127.0.0.1
|
||||
interface: 192.168.42.2
|
||||
do-not-query-localhost: no
|
||||
access-control: 192.168.42.0/24 allow
|
||||
local-data: "cerberus. 86400 IN A 192.168.42.2"
|
||||
local-data: "cerberus.zero. 86400 IN A 192.168.42.2"
|
||||
local-data: "2.42.168.192.in-addr.arpa 86400 IN PTR cerberus.zero."
|
||||
local-zone: "42.168.192.in-addr.arpa" nodefault
|
||||
domain-insecure: "zero"
|
||||
|
||||
forward-zone:
|
||||
name: "zero"
|
||||
forward-addr: 127.0.0.1@5353
|
||||
|
||||
forward-zone:
|
||||
name: "42.168.192.in-addr.arpa."
|
||||
forward-addr: 127.0.0.1@5353
|
||||
|
||||
So what happens here is the following. First we tell unbound on which addresses
it should listen for incoming queries.
Next we state that querying dns servers on localhost is totally okay. This is
needed to later be able to resolve addresses on the local dnsmasq. If your dnsmasq
is running on a different machine, you can leave this out.
With `access-control` we allow the network `192.168.42.0/24` to query the dns
server.
The three `local-data` lines tell unbound that the names cerberus and cerberus.zero
belong to one and the same machine, the DNS server, and provide the matching
reverse entry. Without these lines unbound would not resolve the name of the
local server, even if its name were stated in `/etc/hosts`.
With the `local-zone` line we enable name resolution for the local network.
The key domain-insecure tells unbound that this domain has no support for DNSSEC,
which unbound enables by default.
|
||||
|
||||
The two `forward-zone` entries tell unbound, where it should ask for queries regarding
|
||||
the `zero` tld and the reverse entries of the network. The address in this case points
|
||||
to the dnsmasq instance. In my case, that is running on localhost and port 5353.
|
||||
|
||||
Now we can add unbound to `/etc/rc.conf` and start unbound for the first time
|
||||
with the following command
|
||||
|
||||
$ sysrc local_unbound_enable=YES && service local_unbound start
|
||||
|
||||
Now you should be able to resolve the local hostname already
|
||||
|
||||
$ host cerberus.zero
|
||||
cerberus.zero has address 192.168.42.2
|
||||
|
||||
## configuring dnsmasq
|
||||
|
||||
The next step is to configure dnsmasq, so that it provides DHCP and name resolution
|
||||
for the network. When adjusting the config, please read the comments for each
|
||||
option in your config file carefully.
|
||||
You can find an example config in `/usr/local/etc/dnsmasq.conf.example`. Copy
|
||||
it to `/usr/local/etc/dnsmasq.conf` and open it in your editor:
|
||||
|
||||
port=5353
|
||||
domain-needed
|
||||
bogus-priv
|
||||
no-resolv
|
||||
no-hosts
|
||||
local=/zero/
|
||||
except-interface=re0
|
||||
bind-interfaces
|
||||
local-service
|
||||
expand-hosts
|
||||
domain=zero
|
||||
dhcp-range=192.168.42.11,192.168.42.200,255.255.255.0,48h
|
||||
dhcp-option=option:router,192.168.42.2
|
||||
dhcp-option=option:dns-server,192.168.42.2
|
||||
dhcp-host=00:90:f5:f0:fc:13,0c:8b:fd:6b:04:9a,sodium,192.168.42.23,96h
|
||||
|
||||
First we set the port to 5353, as defined in the unbound config. On this port
dnsmasq will listen for incoming dns requests.
The next two options are there to avoid forwarding dns requests needlessly.
The option `no-resolv` keeps dnsmasq from knowing of any other dns server, and
`no-hosts` does the same for `/etc/hosts`. Its sole purpose is to provide DNS
for the local domain, so it doesn't need to know them.
|
||||
|
||||
The next option tells dnsmasq for which domain it is responsible. It will also
|
||||
avoid answering requests for any other domain.
|
||||
|
||||
`except-interface` tells dnsmasq which interfaces _not_ to listen on. You
should list here all external interfaces to avoid queries from the open
internet discovering hosts on your internal network.
The option `bind-interfaces` makes dnsmasq listen only on the allowed interfaces
instead of listening on all interfaces and filtering the traffic. This makes
dnsmasq a bit more secure, as not listening at all is better than listening
and filtering.
|
||||
|
||||
The two options `expand-hosts` and `domain=zero` will expand all dns requests
with the given domain part, if it is missing. This way, it is easier to resolve
hosts in the local domain.
|
||||
|
||||
The next three options configure the DHCP part of dnsmasq. First is the range.
In this example, the range starts at `192.168.42.11`, ends at `192.168.42.200`
and all IPs get a 48h lease time.
So if a new host enters the network, it will be given an IP from this range.
The next two lines set options sent with the DHCP offer to the client, so it
learns the default route and dns server. As both are running on the same machine
in my case, they point to the same IP.
|
||||
|
||||
Now all machines which should have a static name and/or IP can be set through
dhcp-host lines. You have to give the mac address, the name, the IP and the
lease time.
There are many examples in the example dnsmasq config, so it is best to read
those.
|
||||
|
||||
When your configuration is done, you can enable the dnsmasq service and start it
|
||||
|
||||
$ sysrc dnsmasq_enable=YES && service dnsmasq start
|
||||
|
||||
When you get your first IP, do the following request and it should give you
|
||||
your IP
|
||||
|
||||
$ host $(hostname)
|
||||
sodium.zero has address 192.168.42.23
|
||||
|
||||
With this, we have a running DNS server setup with DHCP.
|
@@ -0,0 +1,44 @@
|
||||
+++
|
||||
title = "setting zpool features"
|
||||
date = "2014-12-10T13:40:27+00:00"
|
||||
author = "Gibheer"
|
||||
draft = false
|
||||
+++
|
||||
|
||||
Before SUN was bought by Oracle, OpenSolaris got ever newer pool versions and
upgrading was just an
|
||||
|
||||
$ zpool upgrade rpool
|
||||
|
||||
away. But since then, the open source version of ZFS gained feature flags, and
`zpool upgrade` now lists for each pool the features that can still be enabled:
|
||||
|
||||
POOL FEATURE
|
||||
---------------
|
||||
tank1
|
||||
multi_vdev_crash_dump
|
||||
enabled_txg
|
||||
hole_birth
|
||||
extensible_dataset
|
||||
embedded_data
|
||||
bookmarks
|
||||
filesystem_limits
|
||||
|
||||
If you want to enable only one of these features, you may have already hit the
problem that `zpool upgrade` can only enable all of them at once, either for one
pool or for all pools.
|
||||
|
||||
The way to go is to use `zpool set`. Feature flags are options on the pool and
|
||||
can also be listed with `zpool get`.
|
||||
|
||||
$ zpool get all tank1 | grep feature
|
||||
tank1 feature@async_destroy enabled local
|
||||
tank1 feature@empty_bpobj active local
|
||||
tank1 feature@lz4_compress active local
|
||||
tank1 feature@multi_vdev_crash_dump disabled local
|
||||
...
|
||||
|
||||
Enabling a feature, for example _multi_vdev_crash_dump_, would then be
|
||||
|
||||
$ zpool set feature@multi_vdev_crash_dump=enabled tank1
|
||||
|
||||
The feature will then disappear from the `zpool upgrade` output and show up as
enabled or active in `zpool get`.
|
@@ -0,0 +1,14 @@
|
||||
+++
|
||||
title = "pgstats - vmstat like stats for postgres"
|
||||
date = "2015-03-02T20:51:09+00:00"
|
||||
author = "Gibheer"
|
||||
draft = false
|
||||
+++
|
||||
|
||||
Some weeks ago a tool got my attention - pgstats. It was mentioned in a [blog post](http://blog.guillaume.lelarge.info/index.php/post/2015/01/25/A-new-vmstat-like-tool-for-PostgreSQL), so I tried it out and it made a very good first impression.
|
||||
|
||||
Now version 1.0 has been released. It can be found on [github](https://github.com/gleu/pgstats).
|
||||
|
||||
It is a small tool to get statistics from postgres in intervals, just like with iostat, vmstat and other *stat tools. It has a number of modules to get these, for example for databases, tables, index usage and the like.
|
||||
|
||||
If you are running postgres, you definitely should take a look at it.
|
@@ -0,0 +1,128 @@
|
||||
+++
|
||||
title = "minimal nginx configuration"
|
||||
date = "2015-03-25T22:11:20+00:00"
|
||||
author = "Gibheer"
|
||||
draft = false
|
||||
+++
|
||||
|
||||
As I was asked today how I manage my nginx setup, I thought I'd write it down.
|
||||
|
||||
The configuration was inspired by the [blog entry of Zach Orr](http://blog.zachorr.com/nginx-setup/)
(looks like the blog post is gone since 2014).
|
||||
The setup consists of one main configuration and multiple domain specific
|
||||
configuration files which get sourced in the main config.
|
||||
If a domain is using certificates, these are pulled in in their respective files.
|
||||
|
||||
I will leave out the performance stuff to make the config more readable. As the
|
||||
location of the config files differs per platform, I will use $CONF_DIR as a
|
||||
placeholder.
|
||||
|
||||
## main configuration
|
||||
|
||||
The main configuration `$CONF_DIR/nginx.conf` first sets some global stuff.
|
||||
|
||||
# global settings
|
||||
user www www;
|
||||
pid /var/run/nginx.pid;
|
||||
|
||||
This will take care of dropping the privileges after the start to the *www*
user and group.
|
||||
|
||||
Next is the http section, which sets the defaults for all server parts.
|
||||
|
||||
http {
|
||||
include mime.types;
|
||||
default_type application/octet-stream;
|
||||
charset UTF-8;
|
||||
|
||||
# activate some modules
|
||||
gzip on;
|
||||
# set some defaults for modules
|
||||
ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
|
||||
|
||||
include sites/*.conf;
|
||||
}
|
||||
|
||||
This part sets some default options for all server sections and helps to keep
the separate configurations simple.
In this example the mime types are included (a large file with mime type
definitions), and the default charset and mime type are set.
|
||||
|
||||
In this section we can also activate modules like gzip ([see gzip on nginx](http://nginx.org/en/docs/http/ngx_http_gzip_module.html)) or set some options for modules like ssl ([see ssl on nginx](http://nginx.org/en/docs/http/ngx_http_ssl_module.html)).
|
||||
|
||||
The last option is to include more config files from the sites directory. This is
|
||||
the directive which makes it possible to split up the configs.
|
||||
|
||||
## server section config
|
||||
|
||||
The server section config may look different for each purpose. Here are some
|
||||
smaller config files just to show, what is possible.
|
||||
|
||||
### static website
|
||||
|
||||
For example the file *$CONF_DIR/sites/static.zero-knowledge.org.conf* looks like this:
|
||||
|
||||
server {
|
||||
listen 80;
|
||||
server_name static.zero-knowledge.org;
|
||||
|
||||
location / {
|
||||
root /var/srv/static.zero-knowledge.org/htdocs;
|
||||
index index.html;
|
||||
}
|
||||
}
|
||||
|
||||
In this case a domain is configured to deliver static content from the directory
`/var/srv/static.zero-knowledge.org/htdocs` on port 80 for the domain *static.zero-knowledge.org*.
If the root path is requested in the browser, nginx will look for the *index.html* to show.
|
||||
|
||||
### reverse proxy site
|
||||
|
||||
For a reverse proxy setup, the config *$CONF_DIR/sites/zero-knowledge.org.conf* might look like this.
|
||||
|
||||
server {
|
||||
listen 80;
|
||||
server_name zero-knowledge.org;
|
||||
|
||||
location / {
|
||||
proxy_pass http://unix:/tmp/reverse.sock;
|
||||
include proxy_params;
|
||||
}
|
||||
}
|
||||
|
||||
In this case, nginx will also listen on port 80, but for the host zero-knowledge.org.
|
||||
All incoming requests will be forwarded to the local unix socket */tmp/reverse.sock*.
|
||||
You can also define IPs and ports here, but for an easy setup, unix sockets might be
|
||||
easier.
|
||||
The parameter `include proxy_params;` includes the config file proxy_params,
which sets some headers when forwarding the request, for example *Host* or
*X-Forwarded-For*. There should be a number of config files already included
with the nginx package, so it is best to take a look in $CONF_DIR.
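For reference, such a proxy_params file typically contains lines like the
following (check the file shipped with your package):

    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;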
|
||||
|
||||
### uwsgi setup
|
||||
|
||||
As I got my graphite setup running some days ago, I can also provide a very bare
|
||||
uwsgi config, which actually looks like the reverse proxy config.
|
||||
|
||||
server {
|
||||
listen 80;
|
||||
server_name uwsgi.zero-knowledge.org;
|
||||
|
||||
location / {
|
||||
uwsgi_pass uwsgi://unix:/tmp/uwsgi_graphite.sock;
|
||||
include uwsgi_params;
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
So instead of `proxy_pass`, `uwsgi_pass` is used to tell nginx that it has to
use the uwsgi format. Nginx will also include the uwsgi parameters, which,
like the proxy_params file, is a collection of parameters to set.
|
||||
|
||||
## conclusion
|
||||
|
||||
So this is my pretty minimal configuration for nginx. It helped me automate the
|
||||
configuration, as I just have to drop new config files in the directory and
|
||||
reload the server.
|
||||
|
||||
I hope you liked it and have fun.
|
@@ -0,0 +1,8 @@
|
||||
+++
|
||||
title = "S.M.A.R.T. values"
|
||||
date = "2015-07-19T10:06:19+00:00"
|
||||
author = "Gibheer"
|
||||
draft = false
|
||||
+++
|
||||
|
||||
I wondered for some time, what all S.M.A.R.T. values mean and which of them could tell me, that my disk is failing. Finally I found a [wikipedia article](https://en.wikipedia.org/wiki/S.M.A.R.T.) which has a nice list of what each value means.
|
@@ -0,0 +1,107 @@
|
||||
+++
|
||||
title = "ssh certificates part 1"
|
||||
date = "2015-07-19T10:33:11+00:00"
|
||||
author = "Gibheer"
|
||||
draft = false
|
||||
+++
|
||||
|
||||
All of my infrastructure's SSH access has been handled with SSH certificates for more than a year
now. As I am asked every now and then how it works, I will describe it in multiple blog posts.
|
||||
|
||||
This part will revolve around Client certificates.
|
||||
|
||||
What is it good for?
|
||||
--------------------
|
||||
|
||||
With general public key usage one can identify a user by his public key. These get
put into an `~/.ssh/authorized_keys` file and if a user presents the correct key, they
are let onto the system.
This approach works well, but it is a bit tricky to find out which key was actually
used. Restricting the user based on his key also requires managing the
authorized_keys file with options on every machine.
|
||||
|
||||
Now SSH certificates on the client side grant the possibility to sign a public key
and remove the requirement for an authorized keys file.
The options can be set directly in the certificate and are active on every server
this certificate is used with. As the certificate can also hold an identification
string, it is easier to see from the logs which key connected and for what purpose.
The only thing needed to make this work is to set every server to trust the signing
CA; no authorized keys file has to be managed anymore.
|
||||
|
||||
generating the CA
|
||||
-----------------
|
||||
|
||||
First we need an SSH key for the purpose of a CA. In a production environment this
should not be the same key as your normal key.
The key is generated like any other key with ssh-keygen
|
||||
|
||||
ssh-keygen -t ed25519 -C CA -f ca.key
|
||||
|
||||
You can choose any key type you want, it works with all types and any type can
|
||||
sign any type.
|
||||
The `-C` flag adds a comment to the key.
|
||||
|
||||
Now we can sign a public key.
|
||||
|
||||
signing a user key
|
||||
------------------
|
||||
|
||||
First we sign a user public key `foouser.pub`.
|
||||
|
||||
ssh-keygen \
|
||||
-s ca.key \
|
||||
-I 'foouser' \
|
||||
-n foouser \
|
||||
foouser.pub
|
||||
|
||||
Now what do all these options mean?
|
||||
|
||||
* `-s` defines the signing key
|
||||
* `-I` is an identification for the certificate. This also shows up in the
|
||||
auth.log on the server.
|
||||
* `-n` the principal, which in this case means the username this key will be
|
||||
allowed to login with.
|
||||
|
||||
To restrict the IP address for the public key, one can use the following line
|
||||
|
||||
-O source-address="127.0.0.1,192.168.42.1"
|
||||
|
||||
Any option from `ssh-keygen(1)` requires its own -O flag, for example:
|
||||
|
||||
-O clear -O no-pty -O force-command="/opt/foo/bin/do_stufff"
|
||||
|
||||
A good source for further options is the ssh-keygen man page.
|
||||
|
||||
After the command was executed, a file foouser-cert.pub shows up. The content
|
||||
can be inspected using ssh-keygen again:
|
||||
|
||||
ssh-keygen -L -f foouser-cert.pub
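The output looks roughly like this (abbreviated):

    foouser-cert.pub:
            Type: ssh-ed25519-cert-v01@openssh.com user certificate
            Public key: ED25519-CERT SHA256:...
            Signing CA: ED25519 SHA256:...
            Key ID: "foouser"
            Valid: forever
            Principals:
                    foouser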
|
||||
|
||||
To get the authentication working with this key, two steps have to be taken.
The first is to put the generated certificate in the same directory as the private
key, so that the ssh client will send the certificate along.
The second is to put the CA public key onto the server, so that it will trust all
certificates created with it.
|
||||
|
||||
This is done with the following option in the sshd_config
|
||||
|
||||
TrustedUserCAKeys /etc/ssh/ssh_user_certs
|
||||
|
||||
where the content of the file _/etc/ssh/ssh_user_certs_ is the ca public key.
|
||||
It is possible to put multiple CAs into that file.
|
||||
|
||||
Now one can connect to the server using the newly created key
|
||||
|
||||
    ssh -vvv -l foouser <yourserver>
|
||||
|
||||
This should print lines like
|
||||
|
||||
debug1: Server accepts key: pkalg ssh-ed25519-cert-v01@openssh.com blen 364
|
||||
debug1: Offering ED25519-CERT public key: /home/foouser/.ssh/id_ed25519
|
||||
debug3: sign_and_send_pubkey: ED25519-CERT SHA256:YYv18lDTPtT2g5vLylVQZiXQvknQNskCv1aCNaSZbmg
|
||||
|
||||
These three lines state for my session, that the server accepts certificates and
|
||||
that my certificate was sent.
|
||||
|
||||
With this, the first step to using SSH certificates is done. In the next post
|
||||
I will show how to use SSH certificates for the server side.
|
@@ -0,0 +1,111 @@
|
||||
+++
|
||||
title = "ssh certificates part 2"
|
||||
date = "2015-07-28T21:20:49+00:00"
|
||||
author = "Gibheer"
|
||||
draft = false
|
||||
+++
|
||||
|
||||
This is the second part of the SSH certificate series, server side SSH
|
||||
certificates. You can find the first one [here](/118).
|
||||
|
||||
This post shows what server side certificates can be used for and how they can
be created.
|
||||
|
||||
What use have server side certificates?
|
||||
---------------------------------------
|
||||
|
||||
SSH certificates on the host side are used to extend the ssh host keys. These
|
||||
can be used to better identify a running system, as multiple names can be
|
||||
provided in the certificate. This avoids the message of a wrong host key in a
|
||||
shared IP system, as all IPs and names can be provided.
|
||||
|
||||
SSH certificates can also help to identify freshly deployed systems in that the
|
||||
system gets certified directly after the deployment by a _build ca_.
|
||||
|
||||
signing a host key
|
||||
------------------
|
||||
|
||||
For this step, we need a CA key. How that can be generated was mentioned
|
||||
in the [first part](/118).
|
||||
We also need the host public key to sign. This can either be copied from /etc/ssh/ on
the server or fetched using _ssh-keyscan_.
|
||||
|
||||
ssh-keyscan foo.example.org
|
||||
|
||||
It can also take a parameter for a specific type
|
||||
|
||||
ssh-keyscan -t ed25519 foo.example.org
|
||||
|
||||
This is needed for some older versions of openssh, where ed25519 public keys
|
||||
were not fetched by default with _ssh-keyscan_.
|
||||
|
||||
The output returned looks like the following:
|
||||
|
||||
zero-knowledge.org ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPIP0JSsdP2pjtcYNcmqyPg6nLbMOjDbRf0YR/M2pu2N
|
||||
|
||||
The second and third field need to be put into a file, so that it can be used
|
||||
to generate the certificate.
|
||||
|
||||
A complete command would then look like this:
|
||||
|
||||
ssh-keyscan foo.example.org | awk '/ssh|ecdsa/ { print $2,$3 }' > host.pub
|
||||
|
||||
With the resulting file, we can now proceed to create the certificate.
|
||||
|
||||
ssh-keygen \
|
||||
-s ca.key \
|
||||
-V '+52w1d' \
|
||||
-I 'foohost' \
|
||||
-h \
|
||||
-n foo.example.org,bar.example.org \
|
||||
host.pub
|
||||
|
||||
The meaning of the options is:
|
||||
|
||||
* `-s` the key to use for signing (the ca)
|
||||
* `-V` interval the certificate is valid
|
||||
* `-I` the identity of the certificate (a name for the certificate)
|
||||
* `-h` flag to create a host certificate
|
||||
* `-n` all names the host is allowed to use (This list can also contain IPs)
|
||||
|
||||
The last option is the public key file to certify.
|
||||
|
||||
This results in a file host-cert.pub, which contains the certificate. It can be
|
||||
viewed like the SSH client certificate, with _ssh-keygen_.
|
||||
|
||||
ssh-keygen -L -f host-cert.pub
|
||||
|
||||
This file now has to be placed in the same directory as the public key on that
host, with the same `-cert.pub` ending.

The last step on the server is to adjust the _sshd_config_, so that it
includes the certificate. For that, add the following line for the fitting
host key, for example:

    HostCertificate /etc/ssh/ssh_host_ed25519_key-cert.pub

After a reload, sshd should load the certificate and make it available for
authentication.
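
How the reload is triggered differs between platforms; as a rough hint,
one of the following generic service commands usually does it (the service
may be named ssh or sshd, depending on the system):

    # pick the variant matching the init system
    systemctl reload sshd
    service sshd reload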

Now the only thing left to do is to tell the client that it should trust
the CA to identify systems. For that, the public key of the CA has to be
added to the file `~/.ssh/known_hosts` in the following format:

    @cert-authority * <content of ca.pub>

The _*_ marks a filter, so different CAs can be trusted depending on the
domain.
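
As an illustration, an entry that trusts this CA only for hosts below
example.org could look like this (the key itself is shortened and purely
illustrative):

    @cert-authority *.example.org ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA...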

With this, you are able to verify your server using only the certificate
provided by the server. When connecting with debugging on, you should get
output like the following:

    $ ssh -v foo.example.com
    ...
    debug1: Server host key: ssh-ed25519-cert-v01@openssh.com SHA256:+JfUty0G4i3zkWdPiFzbHZS/64S7C+NbOpPAKJwjyUs
    debug1: Host 'foo.example.com' is known and matches the ED25519-CERT host certificate.
    debug1: Found CA key in /home/foo/.ssh/known_hosts:1
    ...

With the first part and now the second part done, you can already lock
down your infrastructure pretty well. In the next part, I will show some
of the things I use to keep my infrastructure easily manageable.
@ -0,0 +1,37 @@
+++
title = "das eklige Gesicht XMLs"
date = "2009-08-28T12:23:00+00:00"
author = "Gibheer"
draft = false
+++

Writing documentation is actually not hard. It only becomes hard when
several different output formats are supposed to come out of it.

I have no idea how the various “open source” projects manage this, but for
me it was the big chance to finally try out Docbook. NetBeans supports it
again with milestone 1 of version 6.8, but I am very disappointed.

Docbook is meant for writing books, texts and articles without having to
think about the formatting. However, there is such a huge pile of tags
that I was already glad NetBeans provided me with a template.

It got more and more out of hand when I wanted to build a
[list](http://docbook.org/tdg/en/html/itemizedlist.html) and later also a
[table](http://docbook.org/tdg/en/html/table.html).

So many tags are needed, and you have to consult several sources to find
out how they are meant to be used, that you end up spending more time
remembering how the list works than actually writing the
documentation/book/…

Out of frustration over this harsh setback, I have now simply started
building my own XML syntax, writing the XSLT alongside it. That is
probably easier than working through this mountain of tags.
@ -0,0 +1,29 @@
+++
title = "Wie wenig braucht OpenSolaris?"
date = "2009-08-28T12:38:00+00:00"
author = "Gibheer"
draft = false
+++

Just for fun, Stormwind and I wanted to see whether we could get
opensolaris running on a Sempron 2800+ with 1GB of RAM.

We booted the LiveCD and at first it took a small eternity until the
system was up. But when ZFS wants to grab 1GB and only one GB is there,
that is hardly surprising. Once the system was running, it responded
quickly and without problems.

The installation was supposed to go onto an old 40GB IDE disk. The copying
alone took around 50 minutes. Once the system finally stood on its own
feet, the performance was actually okay, just as we were used to from our
quad core with 4GB of RAM.

The real test only begins once we get the second SATA disk to work,
though. It is already connected, but it is not detected.

Once that problem is solved, we will build a small RaidZ and see how well
opensolaris can hold up then.
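
For reference, creating such a RaidZ would roughly look like the following
sketch; the device names are made up and depend on the controller:

    # build a raidz pool named tank from three disks and check its state
    pfexec zpool create tank raidz c0d0 c1d0 c2d0
    zpool status tank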

When we get that far, there will of course be a follow-up here.
@ -0,0 +1,20 @@
+++
title = "Rails mit Problemen unter OpenSolaris"
date = "2009-08-31T06:09:00+00:00"
author = "Gibheer"
draft = false
+++

And once again it shows that opensolaris ticks a bit differently than
Linux.

Yesterday I installed Ruby on Rails and needed version 2.2.2 for a
specific piece of software. In this version, Activesupport has a
[bug](https://rails.lighthouseapp.com/projects/8994/tickets/1396-framework-crashes-on-launch-on-solaris-with-invalid-encoding-asciiignoretranslit-utf-8)
that only shows up on opensolaris. The faulty part is supposed to convert
the encoding, but from an encoding that does not exist on opensolaris.
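
In case anyone wants to reproduce this, pinning rubygems to exactly this
Rails version works like so (standard rubygems, nothing opensolaris
specific):

    # install exactly the affected version
    gem install rails --version 2.2.2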

With Rails 2.3 the problem is supposed to be fixed, though. By the way,
the version 2.2.3 that is mentioned in the bug report does not exist
(yet).
@ -0,0 +1,32 @@
+++
title = "einzelne Pakete unter OpenSolaris updaten"
date = "2009-09-01T21:12:00+00:00"
author = "Gibheer"
draft = false
+++

There you are, wanting to update your system for once, but without having
to reboot, and then this:

`No updates necessary for this image.`

What was I trying to do? Out of pure convenience, and because I was not
used to anything else, I wanted to update all packages, but without having
to reboot. Since `pkg list -u` spat out plenty of packages, I figured that
with a small script I might simply get such an update done.

As it turned out after some searching, opensolaris has a lock for this
built into the package entire:

    [gibheer-pandora] ~ pfexec pkg install zsh@0.5.11-0.122
    pkg: The following package(s) violated constraints:
    Package pkg:/SUNWzsh@0.5.11,5.11-0.122 conflicts with constraint in
    installed pkg:/entire:
    Pkg SUNWzsh: Optional min_version: 4.3.9,5.11-0.118 max version:
    4.3.9,5.11-0.118 defined by: pkg:/entire

So it is not possible to switch to a version other than build 118, unless
the whole system is pulled along via image-update.
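
The only way that actually works, then, is the full image update; as a
sketch (this is the standard IPS command, and it comes with exactly the
reboot into a new boot environment that I wanted to avoid):

    # update the whole image, including the constraints in pkg:/entire
    pfexec pkg image-update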
@ -0,0 +1,14 @@
+++
title = "OpenSolaris Wiki"
date = "2009-09-07T21:10:00+00:00"
author = "Gibheer"
draft = false
+++

Since I have been looking for the link to a Solaris wiki for quite a while
and have finally found it again, I do not want to withhold it from you,
and above all I do not want to forget it again.

[Here (www.solarisinternals.com)](http://www.solarisinternals.com/wiki)
you will find a great wiki with guides on dtrace, Crossbow, containers and
many other things.
@ -0,0 +1,36 @@
+++
title = "OpenVPN unter OpenSolaris"
date = "2009-09-09T19:32:00+00:00"
author = "Gibheer"
draft = false
+++

At the moment my plan is to set up a whole fleet of virtual instances on
opensolaris using [zones](http://opensolaris.org/os/community/zones/) and
[Crossbow](http://opensolaris.org/os/project/crossbow/). However, I only
have one IP for the main machine, so I cannot publish all the other
instances to the outside. But since I am the only one interested in the
instances, it does not really matter whether they can be seen from the
outside or not. What matters more to me is that I can reach them. So why
not simply build a VPN network?

The whole thing is supposed to be set up so that a separate instance runs
for each service, so the individual services are isolated from each other.
In addition, one instance is supposed to act as a router and forward
incoming requests to the respective instance when necessary. That should
not even be needed, but if it ever is, I want to have thought of it
beforehand :D To reach all the servers, I want to dial into the virtual
network through a VPN connection.
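
The Crossbow part of this plan boils down to one virtual NIC per zone; a
sketch with made-up link and vnic names could look like this:

    # create a virtual NIC on top of the physical link for one zone
    pfexec dladm create-vnic -l e1000g0 vnic0
    # list the configured virtual NICs
    pfexec dladm show-vnic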

As the VPN software I want to use OpenVPN, since with it I could later
also give other people access to the network. For Opensolaris there are no
packages, however, so you have to lend a hand yourself. shl found a good
[guide](http://blogs.reucon.com/srt/2008/12/17/installing_openvpn_on_opensolaris_2008_11.html).
It still refers to RC 16, but it works completely with RC 19. The patch
also applies cleanly and causes no problems when compiling.

Now the VPN only needs to be configured, actually the worst part of it
all. I will report back as soon as that works.
@ -0,0 +1,121 @@
+++
title = "Heidelbeertigerarmstulpen"
date = "2009-09-11T10:18:00+00:00"
author = "Stormwind"
draft = false
+++

Hello my dears
==============

my first arm warmer is finished now, and I was immediately asked to write
down how I made it.



Which of course I am happy to do. :)

Instructions
============

I made my arm warmers from
[Heidelbeertiger dragon wool](http://drachenwolle.de/sockenwolle-handgefaerbt/sockentiger/sockentiger-8.php),
but you are of course free to use any yarn you like.

I used needles of size 2.5 mm, and a gauge swatch of x stitches by x rows
came out at roughly 12x12 cm, so that you have an idea of how big the
whole thing will get.

The cuff
========

The twisted part
----------------

I worked out 16 stitches for each of the 4 needles of my double-pointed
needle set. Makes 64 in total.

They are not supposed to go onto the double-pointed needles right away,
though. So that the end, or rather now the beginning (I started at the
upper arm and knitted towards the hand), gets its funny twist, we first
need 5 rows of garter stitch on normal needles as a base.

Then comes one more row, knitted plain, but this time, after every 4th
stitch, we rotate the right needle 360° up and over, once around itself.

The part with the ribbing
-------------------------

After that we knit another two rows in rib pattern, i.e. alternating two
knit and two purl stitches.

Now we can transfer the whole thing onto the double-pointed needles. So
keep knitting the pattern and move 16 stitches onto each needle.

When joining in the round, the thread can look a bit loose at the start,
but after a few rows that sorts itself out.

That makes in total: 2 more rows on the long needle, 1 to transfer
everything onto the double-pointed needles, and another 22 so that we get
a nice cuff (but you can of course also do more or fewer, depending on how
you like it).

The middle part
===============

Looking through the tube
------------------------

After that I knitted about 90 rows in stockinette. But that can of course
vary slightly for you, depending on how long your forearms are. That is
why I advise you to try the piece on now and then.

25% of the stitches have to go
------------------------------

Because my arm is naturally narrower near the wrists than at my upper
arms, and I still wanted the warmers to fit nice and snugly, I decided to
get rid of a few stitches for the cuff at the end.

4xxx3xxx2xxx1xxx \<== each character represents one stitch, just as you
see them on the knitting needle.

In the first row I then “united” stitch no. 4 with the one before it, and
that on every needle.

After that I knitted 2 rows normally and then decreased stitch no. 3 with
the one before it. And so on, until I had only 12 instead of the 16
stitches.

The end
=======

Part 1
------

Quite simply 2 knit, 2 purl, and that for x rows, or for as long as you
think is enough.

Looks nice, but is totally exhausting
-------------------------------------

… which is why I enlisted a thinner knitting needle and a small crochet
hook to help. Behind each normally knitted stitch, 2 new ones are now
picked up. Since space can get pretty tight doing this, I picked up the
thread with the thin knitting needle and worked it off with the crochet
hook.

So in the end we have 3 times as many stitches on the needles as before.
And that is what will later become the nice ruffled cuff.

Whee, in circles
----------------

After that difficult bit, it gets easier again. Knit about 13 more rows
and then bind off. :)\
Weave in the ends nicely, and the arm warmer should be done.

… only the second one is left to do ;)
@ -0,0 +1,33 @@
+++
title = "Indizes statt Tabellen"
date = "2009-05-05T21:04:00+00:00"
author = "Gibheer"
draft = false
+++

Databases are meant to manage, modify, delete and insert sets of data. To
speed up at least finding and updating data, there are indexes.

Today, however, Oracle really baffled me. On a table that is queried
heavily with like, the queries got slower and slower, and the existing
indexes could not be used because of the like. But when someone created an
index across all 4 columns, that index was preferred over all the other
indexes. And not only did Oracle use the index for searching, it used the
index as the table. According to explain, all the data to be displayed was
pulled directly from the index.

A search through the Oracle documentation could not explain this behaviour
at first. Many pages even stated that like is not able to use indexes.

Now of course the question arises why something like this does not also
work on a normal table, because fast interactions, that is select, update,
insert and delete, should be the goal of a database. Why does Oracle make
you create a copy of the table to reach exactly that goal?

What I have not been able to try yet is whether this index influences the
planner far enough that function calls are also directed straight to it. I
will see whether I can test that more thoroughly.
@ -0,0 +1,83 @@
+++
title = "Lustige Gehversuche mit Gentoo/FreeBSD"
date = "2009-09-11T11:11:00+00:00"
author = "Stormwind"
draft = false
+++

Hello there,
============

I have set my mind on something quite crazy, owing to the fact that my
notebook needs a new hard disk (the old one has somehow become too small
for me…).

Since I want to set up my operating system from scratch anyway, to get rid
of unloved leftovers from past experiments, I can just as well start
experimenting again right away.

So I thought about everything my new operating system should be able to
do, and figured that it would be fun to use ZFS as the file system.

The simplest solution for that would be to use OpenSolaris. However, I am
unfortunately utterly smitten with long-running compile jobs, which put me
on the trail of BSD.

And some FreeBSD version is by now even supposed to be able to boot
properly from ZFS, so that even on my notebook, which only has one hard
disk, I could properly use the advantages of ZFS.

On the other hand, I have grown really fond of my Gentoo and did not
actually want to let it go.

That is why I decided to attempt a weird combination of all three.

Above all, this is supposed to help me with it: [Gentoo/FreeBSD guide
(English)](http://www.gentoo.org/doc/en/gentoo-freebsd.xml)

As described in the guide, I then tried to boot the FreeSBIE CD (2.0.1) on
my notebook, via the external CD-ROM drive, since my laptop has no
built-in one.

Unfortunately it turned out that this CD is, for unfathomable reasons,
incompatible with my drive. (We tested both separately from each other,
and they work fine.)

After several attempts with other FreeBSD live CDs I ended up with
[HeX](http://www.rawpacket.org/projects/hex/hex-livecd), and with the
realization that I will probably have to postpone my great ZFS idea for a
little while, since along the way I read that FreeBSD supports ZFS from
version 7.0 on, but can only boot from it as of version 7.2.

The problem with that is that Gentoo/FreeBSD is still at version 7.1…

So I put my ZFS experiments aside for now and set about installing the
actual Gentoo.

Installing the stage 3 version worked for me without problems. However, I
could not bring my packages up to date, because a few of the updates
refused to succeed.

Furthermore, I unfortunately cannot establish a network connection from my
Gentoo, because there is no init script for the network connection. I
simply assume that the default kernel configuration does not build the
module for the network card, or has not loaded it.

Preliminary result:
===================

All in all my attempt has succeeded so far, and with a little nudging the
Gentoo/FreeBSD does boot, and I have a console I could theoretically work
with.

So you may be curious whether I get the system to the point where I can
work with it productively. In any case, I have learned so far that I still
have to learn a lot about FreeBSD, which I have had almost nothing to do
with until now, since it differs quite a bit from Linux in exactly the
areas that are interesting to me.

**This series will be continued…**