Tue, 14 Dec 2004

I've taken a couple of days this week to work on nfsim again. Went onto #netfilter to find testers, which gave me enough feedback to keep going for a couple of days fixing what were basically packaging and usability issues: you needed to set LD_LIBRARY_PATH, you needed to load modules manually for iptables to work, and you needed to have iptables in /sbin.

But my latest improvements are absolutely wonderful. Firstly, I used the power of talloc to detect and report memory leaks in the code. All the allocations done by "kernel" code (skb_alloc, kmem_cache_alloc, kmalloc and vmalloc) go through talloc, attached to different contexts. At the end, I call all the modules' exit routines, then check talloc_total_blocks() == 1 for each one. Not only does this detect leaks, it reports where they came from.
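
Roughly, the scheme looks like this; a minimal sketch using invented names (nfsim_kmalloc(), module_ctx), not the real nfsim code:

    /* Sketch only: invented names, not the real nfsim sources. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <talloc.h>

    /* One talloc context per "kernel" module: every fake kmalloc,
     * kmem_cache_alloc etc. for that module hangs off it. */
    static TALLOC_CTX *module_ctx;

    static void *nfsim_kmalloc(size_t size)
    {
            return talloc_size(module_ctx, size);
    }

    /* After the module's exit routine has run, only the context itself
     * should remain: more than one block means a leak, and the report
     * names each leaked block's allocation site. */
    static void check_for_leaks(const char *module_name)
    {
            if (talloc_total_blocks(module_ctx) != 1) {
                    fprintf(stderr, "%s leaked memory:\n", module_name);
                    talloc_report_full(module_ctx, stderr);
            }
    }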

But to really get gcov coverage up, you need to test allocation failures and the like. This is actually hard: firstly our test scripts assume that allocations always succeed. Otherwise they'd be a nightmare. Secondly, the allocation points move around in the code, so there's no good way to specify a particular allocation to fail.

To solve the first, I split failures into "script_fail()" which meant the scripted test failed (failed command, failed expectation) vs. "barf()" which meant some internal problem. And when testing allocation failures, I would ignore any call to "script_fail()".

For the second problem, I came up with the idea of using the snapshotting Jeremy and I had been discussing to implement an automatic exhaustive failure test: first pass the allocation, then roll back and try failing it. But implementing snapshotting and rollback, while keeping all the state of allocation failures and successes already done, looked like a nightmare.

Then Jeremy said "why not just fork() the simulator". And voila! We have exhaustive failure path testing.
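
The trick looks something like this; again a sketch with invented helper names (should_i_fail(), nfsim_kmalloc()), not the actual nfsim implementation. Every allocation point forks: the child pretends the allocation failed and runs the failure path to completion, while the parent waits and then carries on with the allocation succeeding. Since children fork at their later allocations too, every combination of failures gets explored:

    /* Sketch only: invented helper names, not the real nfsim sources. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Called at every allocation point.  Returns true (in the forked
     * child) to say "fail this allocation"; the parent waits for the
     * failure path to finish before succeeding.  A script_fail() in the
     * child would be ignored, since failed commands are expected there. */
    static bool should_i_fail(void)
    {
            pid_t child = fork();

            if (child == -1) {
                    perror("fork");
                    exit(1);
            }
            if (child == 0)
                    return true;    /* child: take the failure path */

            int status;
            waitpid(child, &status, 0);
            if (WIFSIGNALED(status))
                    fprintf(stderr, "failure path crashed!\n");
            return false;           /* parent: the allocation succeeds */
    }

    static void *nfsim_kmalloc(size_t size)
    {
            return should_i_fail() ? NULL : malloc(size);
    }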


[/tech] permanent link

Tue, 07 Dec 2004

Was curious to see what coverage we get with the (admittedly very spotty) nfsim-based netfilter test suite. The answer, thanks to gcov, is 38%. Which sounds OK, until you realize that merely running nfsim and shutting it down again gets a good proportion of that, due to initialization and cleanup code. Still, it gives a way to measure improvement in the testsuite from now on.
[/tech] permanent link

Sun, 05 Dec 2004

One centerpiece argument for banning circumvention devices was that, otherwise, copyright holders wouldn't put their material online (I think there's a strong case to be made for statutory licensing for online distribution, but that's another issue). So we passed the Digital Agenda modifications to our Copyright Act in 1999. Now we're getting harsher laws under the Australia-US Free Trade Agreement.

Meanwhile, Canada have taken a much more skeptical view of strengthening copyright monopolies, have resisted passing laws banning circumvention devices, and have not signed the WIPO "Copyright" Treaties. And they just got an iTunes store.


[/IP] permanent link

Thu, 02 Dec 2004

Started cleaning up my patches into series using "quilt", which is a mediocre utility for managing a series of patches. Nonetheless, digging around in the kernel again has resulted in several cleanup patches already, and more to come. I revised and updated a series of netfilter patches, particularly NAT work, which I hope to get through netfilter-devel and into the -mm tree for 2.6.11 or so.

I broke some things in the process, but the code got simpler. Hopefully people with advanced configurations will figure out how to fix things when their setups break (implicit source NAT is not done on locally generated packets). Hmm, perhaps I should detect that and issue a warning.


[/tech] permanent link

That would explain our loss!

We have a netball team, which we named the "Pathetic Losers" after "Whining Pathetic Losers" wouldn't fit on the form. People thought that was causing our losses, so we changed our name to the "Blue Penguins" since we wear blue, and are kinda chubby. We still don't win much.

Now I may have discovered the cause: Mikey thinks we were playing basketball.


[/self] permanent link

Thu, 18 Nov 2004

Software and Patent Protection

I attended IPRIA's "Innovation and the Patent System: Maximising Australia's Economic Development" on Friday. The consensus among the speakers (particularly as expressed by Professor Josh Lerner of the Harvard Business School) was that there is an optimal level of patent protection for a country: enough to provide incentive to innovate, but not enough to cause a net loss due to suppression of other innovation. This seems perfectly reasonable. However, he stated that he didn't see the difference between software and other fields of endeavour. Similarly, the head of IP Australia, Dr Ian Heath, considered all of Richard Stallman's arguments against software patents to be general arguments against patents, and seemed to discount them on those grounds. In other words, Dr Heath thought that software does not warrant any special treatment. Clearly, this case has to be made before those in Australian patent circles would consider a reversion to the pre-1990 days when software was considered non-patentable.

Before I begin, I want to make an important point. I am not a lawyer. A lawyer can castrate a man without taking away his right to procreate, and see no problem. Non-lawyers do not see the distinction, or if they do, consider it absurd and weaselly. I am not a lawyer.

Dr Heath, a reasonable and deep-thinking gentleman, is a lawyer. He assured me that algorithms and theories are not patentable, only their application to reach technical outcomes, and indeed, this is true. Consider a simple invention: the steam engine. Theory explains how water boils, how the volume of the steam is greater than that of the water, and how the pressure can be used to drive a turbine. The engine is patentable (assuming it were novel), but the underlying physics theory is not: you have to apply it in a real way.

In the software world, we create algorithms to solve a problem, such as drawing a curve on the screen: since 1990 that is patentable, because the algorithm is applied by using it to solve that problem. But software is just a new expression of the algorithm; you put it in code instead of writing it down or keeping it in your head. No one else can get any use out of the algorithm, because it only solves that problem, and that use is patented.

That a software patent only covers the use of the algorithm to obtain the result (rather than the algorithm itself) is a distinction which I shall ignore as having no practical effect, and consider myself more rational for it.

A Simple Model of the Patent System

Let us begin with the axiom that the patent system does two good things and one bad thing by removing rights and placing them in the hands of the inventor. The good things are:

  1. Encourage inventors to develop their invention, and
  2. Encourage future development and invention by disclosure of inventions

The bad:

  1. Discourage others from developing the invention

As an aside, I use the term "development" here for want of better terminology: invention is getting to the idea, and development is getting all the way to the marketplace. A policy which increases the amount of invention but doesn't alter the amount of development is useless: it is the end-to-end development which is the desirable outcome.

Both positive and negative effects are real and measurable both in general, and in specific cases. Excellent examples of both sides of patent practice were given during the day.

Development in Software vs. Other Industries

Why does development need encouragement? Because the process of development typically takes significant resources before any income from customers is received: laboratories, staff, equipment, etc. This is the first point at which software and other areas of technology differ: the equipment required to do leading-edge innovation is found in the majority of homes, and costs significantly less than $1000. Economists refer to this as "low barrier to entry", and usually consider it to be a good thing.

But to stop there is to ignore the other half of development: distribution, that is, getting the inventions into the hands of consumers. This is where software really differentiates itself from traditionally-patented goods: the distribution can be so cheap as to be effectively zero cost to the producer. To drive this home, consider Open Source development, in particular my own development of the "netfilter" packet filtering software. The price I paid to have around 10 million people obtain and use my software was approximately $100,000 (12 months of development work plus equipment costs, kindly paid for by one user who wanted to use it). The per-copy cost is so low, in fact, that the cost of tracking and metering it would far outweigh the total cost to me.

Massive Existing Incentives

So the case for encouragement is weaker than it might be for other industries; but even if the capital and distribution costs are amazingly low, clearly the cost of skilled labour is an issue. Fortunately, there is already an incentive system in place for software: "infinite duration limited-release copyright".

infinite duration
The term is so long that all software which runs on hardware existing today is still covered by copyright. Again, speaking as a non-lawyer, it is the effect that matters.
limited-release
Software is produced by the authors as source code, which is readable and changeable by humans. This is almost always mangled into binary code which contains only the parts of the software required for operation on a particular class of computers. This binary code is released and protected by copyright, but we software authors get to partially have our cake and eat it too: we retain a monopoly on support by keeping the only version of the code which is easy to fix (the source code). We get both trade secret and copyright, for exactly the same product.
copyright
The duplication right, removed from owners of copies of the work and placed solely in the hands of the programmer (or their employer). This occurs without registration or the programmer asking for it, and usually requires a contract to transfer, but can be licensed in a multitude of ways.

It is this copyright incentive system which has fuelled the software industry; although software has generally been covered by patents in the last ten to fifteen years, their use in the marketplace has been limited so far. To those not familiar with the industry, four points illustrate the strength of this incentive system:

  1. William Gates III, Chairman of Microsoft Corporation, is widely regarded as the world's richest man, entirely from his company's software copyright revenue.
  2. There are an estimated 10 million software developers in the world. As previously mentioned, skilled labour is the main requirement for software development: each can be considered as having the equivalent of unfettered access to a fully-furnished, state-of-the-art laboratory in other fields. This dwarfs the research and development resources of other fields such as biotechnology by several orders of magnitude.
  3. It's an oft-quoted statistic that 90% of software developers do not work on "shrink-wrapped" software -- software produced and sold as a commodity, in boxes, where copyright protection provides the reward incentive. Put another way, about 90% of software development requires no external incentive, but is driven directly by the desire to use the software. In-house or bespoke software development is the most common form, where development of software is done for a single specialised application and the copyright is never separated from the single copy of the code which is used. Less common is software developed to accompany a larger, non-copyrighted effort, such as computer hardware.
  4. Free Software and Open Source projects have emerged from those who feel that the use of normal copyright licensing in software is too detrimental to their particular field. In other words, they release software under much looser constraints than copyright would allow them to impose, forgoing the copyright incentive; ie. they consider the copyright incentive in software already too strong. This is often a variant on the above theme, where the results of in-house development are released as Open Source because of the minimal cost of doing so, or the potential for external improvements. Such efforts have seen success in a limited but significant number of software areas.

Another Measure of Development Rate

There are plenty of cases (in all fields) where two or more inventors invent the same thing, but only one receives the patent. This is an expected outcome of any "winner takes all" reward system, and implies that the incentive is driving development.

However, in software there are a significant number of cases of independent invention in which at least one inventor is not motivated by the patent reward, which implies that the patent incentive is not driving development. Frequently, the infringing author was completely unaware of the previous work, having independently invented it (the British Telecom Hyperlink patent, the JPEG patent, the rproxy patents).

This is an inevitable result of the massive number of researchers in software driven by other motives, and it is precisely here that software developers feel aggrieved by the "patent minefield" in a way that other industries' developers do not. Patents are supposed to protect the inventor against "free riders" who take the final result of the inventor's years of labour without giving due compensation. However, independent invention is so common in software that the result of the patent system is that patent holders are the "free riders", taking the result of others' labour without providing anything in return.

Benefits of Patents Covering Software

With this background, can we say that the first benefit of patents, encouraging development, is working in the software field? In my experience, the only effect on developers has been that they file more patent applications now that software patents are possible. The rewards from copyright and use of software represent much more certain, tangible benefits, and continue to drive development. Patents, when filed, are an afterthought, based solely on work which would have been done anyway.

The benefit of disclosure is similarly muted in software: the common cases of unpatented independent invention indicate both that most discoveries would have been made anyway, and that most people aren't building on the disclosures of previous patented work.

Harms of Patents Covering Software

If the benefits of patent coverage are diminished or non-existent for software, what of the harm? As mentioned previously, although patent coverage for software was legitimised in most countries around ten to fifteen years ago, inertia and other factors have allowed the majority of software developers to ignore them until recently. Their main use was for cross-licensing and revenue between larger companies, not against the majority of small businesses.

However, as in other areas, patent licensing in software has become more aggressive; as an example, Microsoft recently began extracting revenue from their growing bank of software patents. More worrying has been the use of patents and claims of potential patent liability to steer companies away from the use of competing (often Open Source) technologies, as suggested by the leaked memo from Hewlett-Packard[1].

The response to this escalation has been to divert more resources (mainly programming labour) away from development of software to patent issues, searches and establishing validity of claims. It has stopped several economically-important projects and standards entirely (rproxy, Sender-ID).

In addition, it has introduced a risk to releasing a piece of independently-authored software where there previously was none: a software author won't unwittingly violate copyright, and so can be fairly confident in releasing software. Such "risk-free" releases were the starting ground for most software companies. A continued escalation of patent licensing will require a legal consultation prior to release, to ensure no patent violation, creating a barrier to entry many times higher than the own-labour requirement that exists today, and hence reducing innovation significantly.

Open Source Software

Open Source software is software released under more competitive licensing terms than traditional shrink-wrapped "proprietary" software. The incentives in this case are usually driven by the authors' use of the software itself (ie. "We needed to write it anyway"). The introduction of patent coverage offers no incentive to these authors: the monopoly doesn't make the software more useful. In addition, the full-disclosure nature of Open Source software is already far superior at spreading software development techniques than disclosure by patent, which is encumbered by disclosure delays and legalese.

If patents had driven increased research and disclosure within the (non-Open Source) software field, this would be a benefit to Open Source software in the long term, but as argued previously, we have not seen any increase.

The effect of even a "penny-a-copy" patent license is a vast barrier to Open Source software development. Firstly, it is a truism that all non-trivial software unintentionally infringes multiple patents, often hundreds. Secondly, as pointed out previously, the cost of tracking and metering the software would far outweigh the total current cost of the system, both to the main developers who have to identify infringement, and to all the downstream developers and recipients who can no longer simply download, copy and modify.

Open Source developers feel particularly distressed by the entry of patents into their domain. Firstly, because they rely on the maximal creation and distribution efficiencies which software affords, whereas most proprietary software developers do not. Secondly, because their work is open to all, someone seeking to enforce a patent portfolio can search it far more effectively than proprietary software. Thirdly, because, as noted earlier, no disclosures from the patent system are being used in these cases: patent restrictions and licence fees are not payment for something received from the patent holder.

And finally because they see their output as a public good, as summarised by Eben Moglen, Professor of Law and Legal History at Columbia University in the United States:

Free software, of which the operating system kernel called Linux is one very important example among thousands, free software is the single greatest technical reference library on Planet Earth, as of now.

In conclusion, it has been clear to me as an experienced software practitioner that the patentability of software has brought no improvement to the industry. The power of the additional (patent) incentive is far less than for the traditional industries, which do not also enjoy copyright protection. The costs of the patent system are magnified in the software field because of the extreme efficiencies available, orders of magnitude above physical industries, and hence the disproportionate height of the legal barriers patents introduce. And in both respects, Free and Open Source software represents the worst case: unrewarded by the incentives, and unable to bear the costs.

I believe that IP Australia was correct in refusing to grant software patents until 1990. I believe that we failed to realize the consequences of the court decision which overturned this. These consequences have now become clearer, and the time has come for wider policy debate and examination.


[1] http://news.zdnet.com/2100-3513_22-5276901.html: ZDNet: HP memo: Microsoft planned open-source patent fight, July 20 2004.


[/IP] permanent link

Thu, 14 Oct 2004

The trouble with software patents

Groklaw is running a piece on software patents, from someone who has a software patent and thinks we risk "throwing out the baby with the bathwater" if software patents are not available.

The question is not "do I have the right to get a patent on my idea?" but "is progress better served by granting software patents or not?". There is one school of thought that says that if the US patent system's flaws were fixed, software patents would be OK. Very few people with experience doubt that the US patent system has serious flaws, and the systems in other countries are heading in the same direction.

For proof of this, it was recently estimated that Linux violates 283 patents. Linux is mainly a reimplementation of Unix, a system which predated software patents by a long way: these are therefore "incidental" violations, committed in the normal process of implementing software. This does not bode well for the production of software: you will violate patents, you won't know it, and your work is no longer yours to control.

But let us assume, for a moment, that the levels of obviousness and prior art are dramatically increased, and all these patents are swept away. I can't see how such a thing is possible, but that's another debate.

We are still left with the non-obvious, novel patents, such as in the RProxy patent woes. Do we need patents as an incentive for people to make progress in software? How much damage are patents doing to software progress?

For the first question, the flourishing of software prior to its patentability shows this clearly, such as the case of Dan Bricklin's VisiCalc (the first spreadsheet, not patented, now a market dominated by Microsoft's Excel). This is because, unlike drugs or mechanical devices, software is already protected by copyright: the extension of patents to cover software made it the only area covered by both. Indeed, copyright on software is far more powerful than on a book, because the copyright holder can distribute only the binaries, not their actual creation, leveraging this into a monopoly on support and fixes. With an estimated 10 million programmers in the world, a lack of incentives is very hard to argue: this is a result of the barriers to entry being so low. This low barrier also makes things like Open Source development, and the Internet, possible. It is hard to argue that we would have been better off without these developments.

On the damage side of the equation, these same low barriers to entry make the barriers produced by patents disproportionately destructive. If it costs millions to produce a drug, patent and lawyers' fees don't make much of a difference. It is the normal low barriers of software which make ubiquitous infrastructure possible. Once again, the rproxy problems illustrate the loss we face due to patents in this area.


[/IP] permanent link

Fri, 08 Oct 2004

Reverse Engineering, aka. Science

I've been thinking further on why banning "reverse engineering" is so odious to those in the engineering field. "Reverse engineering" means to figure out how something works by examining it. You see how it behaves, you peer in the cracks, and then you pull it apart and see how the pieces fit together, and what they do. This technique is fundamental to science: explore the world around you. Naturally, you need tools to investigate these things: in the physical world, microscopes and tweezers. For software, it's debuggers, packet sniffers and disassemblers.

Banning "reverse engineering" is equivalent to drawing a pentagram around something and saying "science can't go in here", and should be treated with the same disdain.


[/IP] permanent link

Wed, 06 Oct 2004

DMCA, Blizzard vs bnetd, Open Source in Australia

I am not a lawyer. However, I have read various summaries, and the actual results of Blizzard vs bnetd. Blizzard (owned by Vivendi, an old hand at aggressive copyright litigation) create various games which they sell; these games can be played online on the official Blizzard "Battle.net" servers. bnetd is an Open Source equivalent to these servers. This case has relevance for Australia, as the recent Australia-US Free Trade Agreement contains an obligation to implement similar restrictions to the US DMCA.

Judge Shaw found for Blizzard on two counts: (1) that the bnetd developers had broken the click-through license agreement (EULA) which prohibited reverse engineering, and (2) that they had broken the Digital Millennium Copyright Act (DMCA) by "circumventing" the "secret handshake" used by the game to talk to the server.

The EULA issue is clearly important: you should not be able to prohibit compatible products just by slapping a click-through license on something. Several Australian academics and studies have argued that, on principle, you should not be able to "contract out" of copyright exceptions. To quote the Dee report commissioned by the Senate Australia-US FTA enquiry:

AUSFTA requires Australia to allow copyright holders to transfer such right by contract. The US Trade Advisory Group sees this as meaning that contracts will prevail over exceptions such as 'fair use'. While it is debatable whether the clause achieves this, it would contradict a recommendation of the Commonwealth Law Reform Commission that parties should not be allowed to contract out of exceptions.

But my main concern is the DMCA finding. Judge Shaw:

See Universal City Studios, Inc. v. Corley, 273 F.3d 429, 444 (2nd Cir. 2001) (court rejects argument that because DVD buyer has authority to view DVD, buyer has authority of copyright owner to view DVD in a competing platform; court finds that argument misreads § 1201(a)(3) because the provision exempts from liability those who would "decrypt"--not "use"-- an encrypted DVD with the authority of copyright owner). The defendants did not have the right to access Battle.net mode using the bnetd emulator. Therefore, defendants' access was without the authority of the copyright owner.

Here we see the legacy of the deCSS ban (Universal vs Reimerdes): you don't have implied "authority" for anything other than exactly what the author intended (ie. you don't have implied authority to play a DVD on any platform you want). Therefore, it's "unauthorized circumvention". The debate now is whether the defendants can use the exception for reverse engineering for interoperability which is in the DMCA (mirrored in the AUSFTA), under § 1201(f)(1):

Notwithstanding the provisions of subsection (a)(1)(a), a person who has lawfully obtained the right to use a copy of a computer program may circumvent a technological measure that effectively controls access to a particular portion of that program for the sole purpose of identifying and analyzing those elements of the program that are necessary to achieve interoperability of an independently created computer program with other programs, and that have not previously been readily available to the person engaging in the circumvention, to the extent any such acts of identification and analysis do not constitute infringement under this title.

Here's how Judge Shaw rejects the use of that exception:

It is undisputed that defendants circumvented Blizzard's technological measure, the "secret handshake" between Blizzard games and Battle.net, that effectively controlled access to Battle.net mode. It is true the defendants lawfully obtained the right to use a copy of the computer programs when they agreed to the EULAs and TOU. The statute, however, only exempts those who obtained permission to circumvent the technological measure, not everyone who obtained permission to use the games and Battle.net.

I do not understand the sentence "The statute, however, only exempts those who obtained permission to circumvent the technological measure, not everyone who obtained permission to use the games and Battle.net", because the DMCA exemption, like the FTA, says nothing about requiring permission to circumvent. A requirement that you ask permission to create interoperable products is, frankly, absurdly anti-competitive. We must not allow such an interpretation here in Australia.

The judge also finds that bnetd was not an independently created computer program, despite the fact that it was a completely independent implementation. This stands strongly against years of copyright law precedent, and I believe reflects the judge's disregard for Open Source (as we'll see explicitly later):

Finally, the defendants did not create an independently created computer program. The bnetd program was intended as a functional alternative to the Battle.net service. Once game play starts there are no differences between Battle.net and the bnetd emulator from the standpoint of a user who is actually playing the game. Based on these facts, defendants' actions extended into the realm of copyright infringement and they cannot assert the defenses under § 1201(f)(1).

It also seems that, by offering features which the official servers didn't, bnetd stepped outside the "sole purpose" of achieving interoperability:

The defendants admit that the bnetd project was to provide matchmaking services for users of Blizzard games who want to play in a multi-player environment without using Battle.net. The Court finds that the defendants' actions constituted more than enabling interoperability.

One of the nastiest charges is that bnetd was "trafficking" in circumvention devices. Under the AUSFTA (and presumably the DMCA) this carries criminal charges. The wording of the "trafficking" statute has three tests, one of which is that the program "has only limited commercially significant purpose or use other than to circumvent a technological measure". The judge found that an Open Source program, being freely available, meets that test:

The bnetd emulator had limited commercial purpose because it was free and available to anyone who wanted to copy and use the program.

What remains?

This decision, if it stands, effectively guts the DMCA exception for interoperability. If anything which effectively replaces part of a system is not an "independently created work" for the purposes of the exception, then it's hard to see how any independent development can occur. If your replacement program allows anything the original vendor did not (bnetd didn't check the authorization keys, as its authors didn't know how, and it also provided a matchmaking service), your purpose is "more than enabling interoperability" and the exception doesn't apply. The judge also noted that the existence of (Open Source) bnetd had spawned derived works which might do other things: this seems also to be "more than enabling interoperability". Finally, creating and distributing any Open Source program which circumvents anything is automatically a violation of 17 U.S.C. §1201(a)(2), as it has limited commercial purpose.

If nothing else convinces you that we need legislators to be explicitly aware of the importance of Open Source/Free Software when drafting legislation, this should.


[/IP] permanent link

Mon, 04 Oct 2004

So, there's been lots of talk of cryptographically-signed kernel modules. Security-wise, it's not a win by itself (you need lots of other things), but it does mean that you can prevent some trivial stuff. I've been playing with them.

I am not a lawyer, but the question that Andrew Tridgell raised was: does distributing cryptographically-signed modules violate the GPL, or weaken future legal defence against really noxious Digital Rights Manglement (DRM)? Linus has stated that he can't see how the GPL can affect such a thing, since the keys aren't a derived work, but I believe that's overly simplistic.

DRM methods are designed to restrict the execution of modified programs. The GPL is designed to ensure your right to modify programs (and, presumably, use them). So in simplistic terms, DRM is inimical to the GPL, and vice versa. But does the GPL say anything about it?

Section 3 of the GPL states what you must do if you want to distribute binaries of the GPL work. Someone distributing Linux to run on a computer clearly comes under this section, as does a Linux distributor shipping a pre-compiled Linux kernel. If that section says you have to do something, you have to do it to meet the license.

This section explains that you must provide the source code, in one of three ways. There are a few phrases in this section which attract attention:

  1. "Complete corresponding machine-readable source code",
  2. "The source code for a work means the preferred form of the work for making modifications to it", and
  3. "For an executable work, complete source code means... plus the scripts used to control compilation and installation of the executable."

Does the "complete source code" include the keys used to sign the executable so it can actually run? Does the "scripts use to control compilation and installation" include the keys? It's not spelled out. But "preferred form of the work for making modifications" strongly implies what is spelled out in the premble, which say the purpose of the license is to "guarantee your freedom to share and change free software".

So you could argue that if it's necessary for modification (either as part of the "complete source code" or "scripts used to control compilation and installation"), you have to offer it with the binary. That's clearly the intent of section 3, but I don't know enough to say if the wording is sufficient.

So, where are we with the kernel module-signing issue? This seems to be OK (at least, Eben Moglen says it's OK, and he's a real lawyer). As he says, "It doesn't violate any principle of freedom". To quote him in more detail:

There are imaginable situations in which a private encryption key could be part of the "complete and corresponding source code" under section 3, if the code functionally cannot be built without it. But that's not this situation: a session key created during the course of the build is "provided" when the build ritual that creates and uses that key is provided.

[/IP] permanent link

Andrew Tridgell's talloc library is looking really interesting. I disagree with some of the details, but I converted nfsim to use it, and it made things a little nicer. I'd really like something more meaty to test it out on.
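
For anyone who hasn't played with it: talloc is hierarchical, so freeing a context frees everything allocated under it. A minimal, made-up illustration (the struct is just an example, nothing to do with nfsim):

    #include <talloc.h>

    struct connection {
            char *peer_name;
    };

    static struct connection *new_connection(TALLOC_CTX *ctx, const char *peer)
    {
            struct connection *conn = talloc(ctx, struct connection);
            if (!conn)
                    return NULL;
            /* Child of conn: freed automatically when conn is freed. */
            conn->peer_name = talloc_strdup(conn, peer);
            if (!conn->peer_name) {
                    talloc_free(conn);
                    return NULL;
            }
            return conn;
    }
    /* A later talloc_free(conn) releases peer_name too: one free per tree. */
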
[/tech] permanent link

Sat, 02 Oct 2004

Chris Yeoh explains his vandalization of Wikipedia. I think the joy of Wikipedia is that anyone can add information; something precious will be lost if that becomes impractical because enough people are assholes.
[/IP] permanent link

Tue, 28 Sep 2004

Chris Yeoh vandalized Wikipedia. He did it anonymously, and no doubt if I managed to find it and fix it, he'd just make another incorrect change. I am horrified; feel free to criticise a project openly, but anonymously sabotaging such an effort reflects deep ethical flaws. Kind of like tearing pages out of library books.
[/IP] permanent link

Fri, 17 Sep 2004

The last two weeks through Germany and New York state taught me one thing I hadn't realized before: in Australia, the only place where there is a stretch of green grass surrounded by trees is a golf course. Hence, the rest of the world looks like a series of golf courses. I keep expecting buggies and a flag.
[/self] permanent link

Fri, 10 Sep 2004

Yay! Working on netfilter code again. Some decisions were made at the netfilter summit to simplify the code. In particular, we've decided to (try to) get rid of some complex code in the core. Firstly, it's time to remove the ipfwadm and ipchains backwards compatibility code. I had to provide a special interface half-way into the NAT and connection tracking code for these layers: getting rid of that will allow various cleanups. Secondly, NAT mapping to multiple ranges is a very rarely-used feature which complicates the code. It can be simulated with a random match which chooses different NAT rules for each connection anyway, and it makes the core more complicated. Finally, for local Destination NAT, if we send the packet out a different interface, we also do Source NAT to match the interface address. This has always been questionable, and means that we now have multiple NATs on a single hook. Changing this is likely to break some setups, but many people do not enable local NAT anyway.
[/tech] permanent link

Thu, 09 Sep 2004

I was relatively happy with my Linux Kongress keynote; perhaps a little more polished than the version I gave at OLS last year. Once again, it was not recorded, as far as I know. But the slides (bzip2'd OpenOffice) are here because some people asked.
[/tech] permanent link

Fri, 03 Sep 2004

Good background reading on software patents: Wikipedia's entry on software patents. I am a huge Wikipedia fan, for those who didn't already know (I read it daily).
[/IP] permanent link

Thu, 02 Sep 2004

Tridge has an absolutely brilliant scheme for doing file indexing. We discussed it a while back, but I had forgotten (in fact, I can't remember which of us came up with the scheme; I'd like to claim it as mine only I can't believe I'd have forgotten about something that brilliant if I'd come up with it). I'd love to run off and implement it, but with my upcoming trip to Germany then the US, and all the normal demands on my time, I'm simply not going to have time. I'm sure whoever gets to it first will have a field day with it.
[/tech] permanent link

Wed, 01 Sep 2004

Draft of IP Policy Document

What would a sane IP policy for Australia look like? I don't know either, but just looking at the issues which overlap with the stuff Linux Australia and I care about, it might look like this. Comments welcome.
[/IP] permanent link

Mon, 30 Aug 2004

OK, since I was having a horrible day anyway (IBM network problems, brain still not fully on due to current cold), I decided to set up a standard Blosxom blog and take it for a spin, to see how it goes.

I've also been working on my keynote for Linux Kongress; I'm going through my OLS keynote from 2003, and strip-mining it for good stuff. The Kongress keynote is only 45 minutes (vs. 90 minutes for OLS), so ideally this should leave only the good stuff. I wish I had an audio recording so I could listen and know which bits didn't work.


[/self] permanent link

Sun, 29 Aug 2004

Anthony Towns has suggested I should use Blosxom for my blog; various people have asked for RSS feeds and permalinks, so it seems like a reasonable idea. So I looked at it, and there are several problems. The first is that it wants to run on the server, which is a complete waste of cycles: blogs should be generated statically and uploaded. You can run it in that mode, but then it generates broken URLs (I can force it, but all URLs should be relative so I can move my site and it doesn't break) and, mysteriously, gets the time on the entries wrong (no, it's not GMT, either).

Finally, it's aimed at the "random thoughts" kind of blog, not an online diary where the most important thing is the date on which the entry was written. So one entry per file doesn't really make any sense for this page, and a title line doesn't really make sense either. I don't really want to rewrite it, or write my own, so I'm debating what to do. One option is to simply run a script over this diary to produce the Blosxom entries, and generate a "pretty" diary from that. Of course, I like the idea of categories, so I'd have to insert markers in my blog for that.


[/tech] permanent link