#!/bin/blog

October 3, 2017

“Commands that show your Wifi passwords” roundup

Filed under: Sicherheit, UNIX/Linux/BSD — martin @ 9:51 pm

With a hint of sensationalism, @DynamicWebPaige asks:

Did you know that “netsh wlan show profile” shows every network your computer has ever connected to? And “key=clear” shows the *passwords*?

No, I didn’t, and to be frank, I don’t care. But I recently played with NetworkManager on Linux and saw my Wifi passwords in discrete files under /etc/NetworkManager/system-connections/.

So here’s how to show stored Wifi passwords on Windows, Linux and MacOS:

Windows

We’ve already seen that it’s quite straightforward if you’re able to start a cmd shell as the system administrator.

First, the list of used SSIDs:

netsh wlan show profile

Second, the password for any given SSID:

netsh wlan show profile <ssid> key=clear

Linux

We can safely assume that anyone who configures their wpa_supplicant manually won’t be surprised that the passwords are stored in the clear. So let’s move on to NetworkManager, which is what most Linux desktop users will use to connect to Wifi networks. NetworkManager stores one file per configured network connection in the directory /etc/NetworkManager/system-connections/, so the simplest approach is to just grep for the passwords to get a comprehensive list:

sudo grep -H psk= /etc/NetworkManager/system-connections/*

macOS

MacOS (whatever way it’s supposed to be capitalized this time around) makes the task quite hard, because the saved networks are stored in a property list and the passwords need to be retrieved from the key ring one by one.

Here’s how to list the SSIDs of the saved networks:

defaults read \
 /Library/Preferences/SystemConfiguration/com.apple.airport.preferences |
 grep SSIDString

And here is how to read a single password from the key store:

security find-generic-password -w -a <ssid>
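
If you want everything in one go, here is a rough sketch that loops over the stored SSIDs and asks the keychain for each PSK; expect an authorization prompt per lookup, and note that the sed expression is only a best-effort guess at the plist output format:

defaults read /Library/Preferences/SystemConfiguration/com.apple.airport.preferences |
 grep SSIDString |
 sed -E 's/.*SSIDString = "?([^";]+)"?;/\1/' |
 while read -r ssid; do
   printf '%s: ' "$ssid"
   security find-generic-password -w -a "$ssid" < /dev/null
 done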

So here you go, have cross-platform fun. 🙂

September 21, 2017

IPv6 Privacy Stable Addressing Roundup

Filed under: Internet und Intranet, UNIX & Linux — martin @ 5:15 pm

“Okay, let’s see whether we can reach your Macbook externally via IPv6. What’s the address?”

Sure, let’s have a look.

$ ifconfig
...
 inet6 2a03:2260:a:b:8aa:22bf:7190:ef36 prefixlen 64 autoconf secured 
 inet6 2a03:2260:a:b:b962:5127:c7ec:d2df prefixlen 64 autoconf temporary 
...

Everybody knows that one of these is a random IP address according to RFC 4941: Privacy Extensions for Stateless Address Autoconfiguration in IPv6, which changes once in a while so that external observers (e.g. web servers I access) can’t keep track of my hardware’s Ethernet MAC address. This is the one we do NOT want if we want to access the Macbook from the internet. We want the stable one, the one that has the MAC address encoded within, the one with the ff:fe in the middle, as we all learned in IPv6 101.

It turns out, all of my devices that configure themselves via SLAAC, namely a Macbook, an iPhone, an iPad, a Linux laptop and a Windows 10 workstation, don’t have ff:fe addresses. Damn, I have SAGE status at he.net, I must figure out what’s going on here!

After a bit of research totally from scratch with the most adventurous search terms, it turns out that these ff:fe, or more professionally, EUI-64 addresses, have become a lot less common than 90% of IPv6 how-to documents and privacy sceptics want us to believe. On most platforms, they have been replaced by Cryptographically Generated Addresses (CGAs), as described in RFC 3972. The RFC is a close relative to RFC 3971, which describes a Secure Neighbor Discovery Protocol (SeND). Together, they describe a cryptographically secure, PKI-based method of IPv6 address generation. However, as of this writing, only a PKI-less stub implementation of RFC 3972 seems to have become commonplace.

Those CGAs, or as some platforms seem to call them, Privacy Stable Addresses, are generated once during the first startup of the system. The address itself, or the seed used to randomize it, may be (and usually is) persistently stored on the system, so the system will come up every time with this same IPv6 address instead of one following the well-known ff:fe pattern.

To stick with the excerpt from my macOS ifconfig output above, the address marked temporary is a Privacy Extension address (RFC 4941), while the one marked secured is the CGA (RFC 3972).
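
For the Linux laptop mentioned above, the same distinction can be checked with iproute2 (the interface name is a placeholder); the RFC 4941 addresses should show up with the temporary flag, while the stable address does not carry it:

ip -6 addr show dev wlan0 scope global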

It’s surprisingly hard to find discussions on the web where those two types aren’t constantly confused, used synonymously, or treated like ones that both need to be exterminated, no matter the cost. This mailing list thread actually is one of the most useful resources on them.

This blog post is a decent analysis of the behaviour on macOS, although it’s advised to ignore the comments.

This one about the situation on Windows suffers from a bit of confusion, but is where I found a few helpful Windows commands.

The nicest resource about the situation on Linux is this German Ubuntuwiki entry, which, given a bit of creativity, may also provide a few hints to non-German speakers.

So, how to configure this?

  • macOS
    • The related sysctl is net.inet6.send.opmode.
    • Default is 1 (on).
    • Note how this is the only one that refers to SeND in its name.
  • Windows
    • netsh interface ipv6 set global randomizeidentifiers=enabled store=persistent
    • netsh interface ipv6 set global randomizeidentifiers=disabled store=persistent
    • Default seems to be enabled.
    • Use store=active and marvel at how Windows instantly(!) replaces the address.
  • Linux
    • It’s complicated.
    • NetworkManager defaults to using addr-gen-mode=stable-privacy in the [ipv6] section of /etc/NetworkManager/system-connections/<Connection>.
    • The kernel itself generates a CGA if addrgenmode for the interface is set to none and /proc/sys/net/ipv6/conf/<interface>/stable_secret gets written to.
    • NetworkManager and/or systemd-networkd take care of this. I have no actual idea.
    • In manual configuration, CGA can be configured by using ip link set addrgenmode none dev <interface> and writing the stable_secret in a pre-up action. (See the Ubuntu page linked above, or the sketch right after this list, for an example.)
  • FreeBSD
    • FreeBSD has no support for CGAs, other than a user-space implementation through the package “send”, which I had no success configuring.
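
Here is a minimal sketch of the manual Linux approach, following the description in the bullet points above and assuming a Debian-style ifupdown setup; the interface name and the secret value are placeholders:

# /etc/network/interfaces
iface eth0 inet6 auto
    # Keep the kernel from generating an EUI-64 address right away,
    # then hand it a persistent seed for the stable address.
    pre-up ip link set addrgenmode none dev eth0
    pre-up echo "0123:4567:89ab:cdef:0123:4567:89ab:cdef" > /proc/sys/net/ipv6/conf/eth0/stable_secret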

So far, I haven’t been able to tell where macOS, Windows and NetworkManager persistently store their seeds for CGA generation. But the next time someone goes looking for an ff:fe address, I’ll know why it can’t be found.

July 29, 2017

Debian /boot old kernel images

Filed under: Uncategorized, UNIX/Linux/BSD — martin @ 10:59 am

So I was looking at yet another failed apt-get upgrade because /boot was full.

After my initial whining on Twitter, I immediately received a hint towards /etc/apt/apt.conf.d/01autoremove-kernels, which is generated by /etc/kernel/postinst.d/apt-auto-removal after the installation of new kernel images. The file contains a list of kernels that the package manager considers vital at this time. In theory, all kernels not covered by this list should be removable by running apt-get autoremove.
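
To see for yourself what the package manager currently protects, compared to what is actually installed:

# Kernels APT considers vital and will not autoremove
cat /etc/apt/apt.conf.d/01autoremove-kernels

# Kernel images currently installed
dpkg -l 'linux-image-*' | awk '/^ii/ {print $2}'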

However, it turns out that apt-get autoremove would not remove any kernels at all, at least not on this system. After a bit of peeking around on Stack Exchange, I found that this still somewhat newish concept seems to be plagued by a few bugs, especially concerning kernels that are (Wrongfully? Rightfully? I just don’t know.) marked as manually installed in the APT database: “Why doesn’t apt-get autoremove remove my old kernels?”

The solution, as suggested by an answer to the linked question, is to mark all kernel packages as autoinstalled before running apt-get autoremove:

apt-mark showmanual | 
 grep -E "^linux-([[:alpha:]]+-)+[[:digit:].]+-[^-]+(|-.+)$" | 
 xargs -n 1 apt-mark auto
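
If in doubt, a dry run shows beforehand which packages would actually be removed:

# Simulation only - nothing is actually removed
apt-get -s autoremove | grep linux-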

I’m not an APT expert, but I’m posting this because the post-install hook that prevents the current kernel from being autoremoved makes the procedure appear “safe enough”. As always, reader discretion is advised. And there’s also the hope that it will get sorted out fully in the future.

July 7, 2017

How expiration dates in the shadow file really work

Filed under: Uncategorized, UNIX & Linux — martin @ 6:24 pm

tl;dr: Accounts expire as soon as UTC reaches the expiration date.

In today’s installment of my classic shame-inducing series “UNIX basics for UNIX professionals”, I want to talk about account (and password) expiration in /etc/shadow on Linux.

The expiration time is specified in days since January 1st, 1970. In the case of account expiration, the corresponding value can be found in the second-to-last field in /etc/shadow.

Account expiration can be configured using the option “-E” of the “chage” tool. In this case, I want the user “games”, which I’ll be using for demonstration purposes, to expire on the 31st of December, 2017:

# chage -E 2017-12-31 games

Using the „-l“ option, I can now list the expiration date of the user:

# chage -l games
[…]
Account expires : Dec 31, 2017
[…]

The first thing to take away here is that, since I can only specify a number of days, I cannot let a user expire at an arbitrary time of day. In /etc/shadow, I now have:

# getent shadow | awk -F: '/^games:/{print $8}'
17531

This can of course be converted to a readable date:

# date --date='1970-01-01 00:00:00 UTC 17531 days'
Sun Dec 31 01:00:00 CET 2017
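
The reverse direction works too; with GNU date, the day number for a given calendar date (interpreted as UTC) can be computed like this, which yields the 17531 seen above:

echo $(( $(date -u -d 2017-12-31 +%s) / 86400 ))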

So, will the account still be usable on December 31st? Let’s change its expiration to today (the 7th of July, 2017) to see what happens:

# date
Fri Jul 7 12:58:32 CEST 2017
# chage -E today games
# chage -l games
[…]
Account expires : Jul 07, 2017
[…]
# su - games
Your account has expired; please contact your system administrator
[…]

I’m now left with only the question of whether this expiration day is aligned to UTC or to local time.

# getent shadow | awk -F: '/^games:/{print $8}'
17354
# date --date='1970-01-01 00:00:00 UTC 17354 days'
Fri Jul 7 02:00:00 CEST 2017

I’ll stop my NTP daemon, manually set the date to 00:30 today and see if the games user has already expired:

# date --set 00:30:00
Fri Jul 7 00:30:00 CEST 2017
# su - games
This account is currently not available.

This is the output from /usr/sbin/nologin, meaning that the account is not expired yet, so I know for sure that the expiration date is not according to local time but to UTC.

Let’s move closer to our expected threshold:

# date --set 01:30:00
Fri Jul 7 01:30:00 CEST 2017
# su - games
This account is currently not available.

Still not expired. And after 02:00:

# date --set 02:30:00
Fri Jul 7 02:30:00 CEST 2017
# su - games
Your account has expired; please contact your system administrator

So, in order to tell from a script whether an account has expired, I simply need to get the number of days since 1970-01-01. If this number is greater than or equal to the value in /etc/shadow, the user has expired.

DAYSSINCE=$(( $(date +%s) / 86400 )) # This is days till now as per UTC.
EXPIREDAY=$(getent shadow | awk -F: '/^games:/{print $8}')
if [[ $DAYSSINCE -ge $EXPIREDAY ]] # Greater or equal
then
    EXPIRED=true
fi

One last thought: We’ve looked at a time zone with a small offset from UTC. What about timezones with larger offsets, in the other direction?

  • If we move the timezone to the east, further into the positive from UTC, it will behave the same as here in CEST and the account will expire sometime during the specified day, when UTC hits the same date.
  • If we move the timezone far to the west, e.g. to PST, and an absolute date is given to “chage -E”, the account will probably expire early, on the day before the scheduled expiration. I was not able to find anything useful on the web and even my oldest UNIX books from the 1990s mention password expiration only casually, without any detail. Active use of password expiration based on /etc/shadow seems to be uncommon. The code that seems to do the checking is here and it does not appear to care about time zones at all.
  • Any comments that clarify the behaviour in negative offsets from UTC will be appreciated.

January 5, 2016

SSH firewall bypass roundup

Filed under: UNIX & Linux — martin @ 8:35 pm

So my SSH workflow has reached a turning point, where I’m going to clean up my ~/.ssh/config. Some entries had been used to leverage corporate firewall and proxy setups for accessing external SSH servers from internal networks. These are being archived here for the inevitable future reference.

I never use “trivial” chained SSH commands, but always want to bring up a ProxyCommand, so I have a transparent SSH session for full port, X11, dynamic and agent forwarding support.

ProxyCommand lines have been broken up for readability, but I don’t think this is supported in ~/.ssh/config and they will need to be joined again to work.

Scenario 1: The client has access to a server in a DMZ

The client has access to a server in an internet DMZ, which in turn can access the external server on the internet. Most Linux servers nowadays have Netcat installed, so this fairly trivial setup works 95.4% of the time.

# ~/.ssh/config
Host host.external
ServerAliveInterval 10
ProxyCommand ssh host.dmz /usr/bin/nc -w 60 host.external 22

Scenario 2: As scenario 1, but the server in the DMZ doesn’t have Netcat

It may not have Netcat, but it surely has an ssh client, which we use to run an instance of sshd in inetd mode on the destination server. This will be our ProxyCommand.

# ~/.ssh/config
Host host.external
ServerAliveInterval 10
ProxyCommand ssh -A host.dmz ssh host.external /usr/sbin/sshd -i

Scenario 2½: Modern version of the Netcat scenario (Update)

Since OpenSSH 5.4, the ssh client has its own way of reproducing the Netcat behavior from scenario 1:

# ~/.ssh/config
Host host.external
ServerAliveInterval 10
ProxyCommand ssh -W host.external:22 host.dmz

Scenario 3: The client has access to a proxy server

The client has access to a proxy server, through which it will connect to an external SSH service running on Port 443 (because no proxy will usually allow connecting to port 22).

# ~/.ssh/config
Host host.external
ServerAliveInterval 10
ProxyCommand /usr/local/bin/corkscrew 
   proxy.server 3128 
   host.external 443 
   ~/.corkscrew/authfile
# ~/.corkscrew/authfile
username:password

(Omit the authfile part, if the proxy does not require authentication.)

Scenario 4: The client has access to a very restrictive proxy server

This proxy server has authentication, knows it all, intercepts SSL sessions and checks for a minimum client version.

# ~/.ssh/config
Host host.external
ServerAliveInterval 10
ProxyCommand /usr/local/bin/proxytunnel 
   -p proxy.server:3128 
   -F ~/.proxytunnel.auth 
   -r host.external:80 
   -d 127.0.0.1:22 
   -H "User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:29.0) Gecko/20100101 Firefox/29.0\nContent-Length: 0\nPragma: no-cache"
# ~/.proxytunnel.auth
proxy_user=username
proxy_passwd=password

What happens here:

  1. host.external has an Apache web server running with forward proxying enabled.
  2. proxytunnel connects to the forward proxy specified with -r, via the corporate proxy specified with -p, and uses it to connect to 127.0.0.1:22 on the forward-proxying Apache.
  3. It sends a hand-crafted request header to the intrusive proxy, which mimics the expected client version.
  4. Mind you, although the connection is to a non-SSL service, it still is secure, because encryption is being brought in by SSH.
  5. What we have here is a hand-crafted exploit against the know-it-all proxy’s configuration. Your mileage may vary.

Super sensible discretion regarding the security of your internal network is advised. Don’t fuck up, don’t use this to bring in anything that will spoil the fun. Bypass all teh firewalls responsibly.

October 25, 2014

CentOS 7 on MD-RAID 1

Filed under: UNIX & Linux — martin @ 2:47 pm

Figuring this out took me quite a bit of time. In the end, I approached the starter of this hilariously useless CentOS mailing list thread, who assured me that indeed he had found a way to configure MD-RAID in the installer, and behold, here’s how to install CentOS 7 with glorious old-school software RAID.

In the “Installation Destination” screen, select the drives you want to install onto and “I will configure partitioning”. Then click “Done”.

In the “Manual Partitioning” screen, let CentOS create the partitions automatically, or create your own partitioning layout. I will let CentOS create them automatically for this test. Apparently due to restrictions in the installer, /boot is required, but can’t be on a logical volume, so it appears as primary partition /dev/sda1. The root and swap volumes are in a volume group named centos.

The centos volume group will need to be converted to RAID 1 first. Select the root volume and find the “Modify…” button next to the Volume Group selection drop-down. A window will open. In this window, make sure both drives are selected and select “RAID 1 (Redundancy)” from the “RAID Level” drop-down. Repeat this for all volumes in the centos volume group. If you are using the automatic partition layout, note how, after this step, the file system sizes have been reduced to half their original size.

As the final step, select the /boot entry and use the “Device Type” drop-down to convert /boot to a “RAID” partition. A new menu will appear, with “RAID 1 (Redundancy)” pre-selected. The sda1 subscript below the /boot file system will change into the “boot” label once you click anywhere else in the list of file systems.

Click “Done”, review the “Summary of Changes”, which should immediately make sense if you have ever configured MD-RAID, and the system will be ready for installation.

October 21, 2014

Overriding the Mozilla Thunderbird HELO hostname

Filed under: Internet, Paranoia — martin @ 5:23 pm

I found that when connecting through a SOCKS proxy (e.g. an SSH dynamic forward), Mozilla Thunderbird tends to leak its local hostname (including the domain of the place where you are at that moment) as the HELO/EHLO argument to its SMTP submission server, which then writes it into the first Received header.

To avoid this, use about:config and create the following configuration key and value:

mail.smtpserver.default.hello_argument = some-pc

Or whatever hostname you prefer.
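
The same preference can presumably also be set from a user.js file in the Thunderbird profile directory, if you prefer keeping such tweaks in a file:

// user.js in the Thunderbird profile directory
user_pref("mail.smtpserver.default.hello_argument", "some-pc");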

Reference: Mozillazine – Replace IP address with name in headers

October 17, 2014

What does the slash in crontab(5) actually do?

Filed under: UNIX & Linux — martin @ 2:16 pm

That’s a bit of a stupid question. Of course you know what the slash in crontab(5) does, everyone knows what it does.

I sure know what it does, because I’ve been a UNIX and Linux guy for almost 20 years.

Unfortunately, I actually didn’t until recently.

The manpage for crontab(5) says the following:

[Screenshot of the crontab(5) manpage section on step values]

It’s clear to absolutely every reader that */5 * * * * in crontab means: run every 5 minutes. And it works the same for every proper divisor of 60, of which there are quite a few: 2, 3, 4, 5, 6, 10, 12, 15, 20, 30.

However, */13 * * * * does not mean that the job will be run every 13 minutes. It means that within the range *, which implicitly means 0-59, the job will run every 13th minute: 0, 13, 26, 39, 52. Between the :52 and the :00 run, there will be only 8 minutes.

Up to here, things look like a simple modulo operation: if minute mod interval equals zero, run the job.

Now, let’s look at 9-59/10 * * * *. The range starts at 9, but unfortunately, our naive modulo calculation based on wall clock time fails. Just as described in the manpage, the job will run every 10th minute within the range. For the first time at :09, after which it will run at :19 and subsequently at :29, :39, :49 and :59 and then :09 again.

Let’s look at a job that is supposed to run every second day at 06:00 in the morning: 0 6 */2 * *. The implied range in */2 is 1-31, so the job will run on all odd days, which means that it will run on the 31st, directly followed by the 1st of the following month. The transitions from April, June, September and November to the following months will work as expected, while after all other months (February only in leap years), the run on the last day of the month will be directly followed by one on the next day.

The same applies for scheduled execution on every second weekday at 06:00: 0 6 * * */2. This will lead to execution on Sunday, Tuesday, Thursday, Saturday and then immediately Sunday again.

So, this is what the slash does: It runs the job every n steps within the range, which may be one of the default ranges 0-59, 0-23, 1-31, 1-12 or 0-7, but does not carry the remaining steps of the interval over into the next pass of the range. The “every n steps” rule works well with minutes and hours, because they have many divisors, but will not work as expected in most cases that involve day-of-month or day-of-week schedules.
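
To put the examples from above side by side in crontab syntax (the job path is just a placeholder):

# Runs at minutes 0, 13, 26, 39 and 52 - only 8 minutes between :52 and :00
*/13 * * * *    /usr/local/bin/job
# Runs at minutes 9, 19, 29, 39, 49 and 59
9-59/10 * * * * /usr/local/bin/job
# Runs at 06:00 on every odd day of the month
0 6 */2 * *     /usr/local/bin/job
# Runs at 06:00 on Sunday, Tuesday, Thursday and Saturday
0 6 * * */2     /usr/local/bin/job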

But we all knew this already, didn’t we?

August 6, 2014

Playing Blu-rays on the Mac

Filed under: Hardware, Movies — martin @ 9:06 pm

For a few days now, I have been the proud owner of an Apple Macbook Pro (Late 2013). It is my first notebook with USB 3.0. And although I had been buying exclusively USB 3.0 peripherals over the last few years, I was still missing a matching optical burner. So I went for the shadiest OEM junk Amazon has to offer in this regard. It can handle Blu-ray, so the question arose how best to watch Blu-ray discs on the Mac. When travelling, after all, having the disc in your bag does have its advantages over iTunes or Amazon streams across unpredictably weak wireless LANs or even mobile networks.

The most obvious approach was of course to try playing the Blu-ray discs with VLC. Unfortunately, this endeavour was not crowned with success. A search engine quickly turns up the AACS library to plug into VLC, but the keyfile that can also be found is extremely rudimentary and, despite its big “UPDATED!” notice, was already several years old at that point. With some luck, you can dig up the matching key for the Blu-ray disc at hand in one of the usual forums, but you don’t always get lucky, and so no key could be found for the discs I had with me at that moment. For now, the content industry seems to have won, so that Blu-ray discs can’t really be used in an open-source workflow at all, or only with plenty of manual work.

So the only remaining refuge was commercial players, or rather THE commercial player, since the entire competition of the “MacGo Mac Blu-ray Player” appears to descend from that very product. MacGo’s homepage features a brutal German translation (“Wir dedizierten uns, der Führer in der DVD Blu-ray Video Konvertierungstechnologie zu sein!”, roughly: “We dedicated ourselves to being the leader in DVD Blu-ray video conversion technology!”) that would make reasonably professionalized Nigeria scammers cringe with secondhand embarrassment. So it was from this adventurous website that I bought this no less adventurous piece of software.

Admittedly, even in the DVD era I never cared much about menus and extras, so I get along just fine with the rudimentary navigation of the MacGo player. With discs containing many episodes that can’t be selected properly, though, I would understand if a certain dissatisfaction spread over the fact that the disc’s original menu is not displayed.

It doesn’t work entirely without an internet connection here either, though, because the decryption of the discs is established over the network after they are inserted. That should not be a problem even on poor connections or while roaming.

The dodgy player played the discs I tested absolutely without problems, and with minimal CPU load. Visually, its user interface is really no highlight, not least because it isn’t designed for the Macbook Pro’s Retina display. Still, from my point of view, the five minutes of anxiety while buying the software were worth it.

That Apple offers no Blu-ray support with its “Superdrive” is and remains a sad thing. Even if Steve Jobs should turn out to have been right in describing Blu-ray as a “big bag of hurt”, it would be nice to be able to buy an external drive with a stable power supply directly from Apple. Ubiquitous fast internet access that would let you help yourself to movies at any time will remain a distant dream for years to come. Until then, the video shelf in the nearest supermarket is simply closer than the iTunes Store.

April 26, 2014

We could hold Microsoft and Oracle liable for something like Heartbleed

Filed under: Open Source — martin @ 3:36 pm

A sentence, spoken by a customer who should know better, after the Heartbleed debacle had already passed its peak:

“We could hold Microsoft and Oracle liable for something like Heartbleed.”

Although the statement puts itself in the proper light for anyone with more than four weeks of experience in IT, I have the time today to write a bit about it.

Why can an open-source project like OpenSSL not be held liable for serious defects that cause considerable staff effort and a loss of trust among customers?

Well, everybody knows that. It’s easy to look up in the license, as with any open-source project. OpenSSL is distributed under the OpenSSL license, which says:

[Screenshot of the warranty disclaimer in the OpenSSL license]

So that would be settled, as expected. All warranties are disclaimed. You know the drill. That’s just how it is with hacked-together software from unpaid hobbyists tinkering in their basements.

With proper commercial software, you wouldn’t have to put up with anything like that. Or would you?

RedHat

Before we get to Microsoft and Oracle, let’s take a look at the Linux distributor through which my customer had installed his OpenSSL package. RedHat has an “Enterprise Agreement”, or “Geschäftskundenvertrag” in its German version:

[Screenshot of the liability and warranty sections of the RedHat Enterprise Agreement]

Section 8.1 initially caps liability at 45,000 euros. The following sections then stipulate that RedHat will not be liable for any damages whatsoever, except in cases of intent. All of this is followed by the quoted section 10.2, in which any warranty is expressly disclaimed.

Microsoft

On to Microsoft. Here I looked at the license terms for Windows 2012 Server in the “Datacenter Edition”:

[Screenshot of the limitation-of-liability and warranty sections of the Microsoft license terms]

Much like RedHat, Microsoft sets a liability cap in section 23. It extends at most to the amount that was paid for the software. Any further liability for damages of any kind is excluded. The warranty terms that follow promise that the software will work essentially as described and will, if needed, be repaired free of charge. In case it cannot be repaired, a refund of the purchase price is promised, of course not without the condition that the software must then be uninstalled. “These are your only claims.” So there are, de facto, no claims at all.

Oracle

For Java, the “Oracle Binary Code License Agreement for the Java SE Platform Products and JavaFX” applies:

[Screenshot of the warranty and liability sections of the Oracle Binary Code License Agreement]

In section 4, much like most open-source licenses do, Oracle gives no warranty at all for Java, and in section 5 it sets a pro forma liability cap of 1,000 US dollars and, above all, excludes any liability.

No matter how much good one wants to say about Oracle or not, at least here they stay so close to the open-source licenses that their wording gets straight to the point.

Conclusion

We can see that not only are ridiculously low liability caps part of the business, but that, as a matter of course, all warranties are fundamentally disclaimed. This is not a fairy tale told by open-source zealots, but the simple truth that anyone can easily look up for themselves.
