
Saturday, November 06, 2004

Stem Cell Research 

One major issue people love to tackle is Stem Cell Research. Most of them are uninformed and have no idea what they're talking about, but they want to quickly blame the Republicans for banning Stem Cell Research. Recently, we've been able to narrow "Republicans" down to "Bush," so we can treat this as Bush Bashing.

Well, let's bring up some facts, and some speculation. I'll note that I'm not a biologist, but I will explain where my logical conclusions come from. Don't argue with me unless you have a Ph.D. in biology or medicine; I'm not interested in bitchmatches with people who aren't qualified to refute my analysis.

I'll keep this simple and use a narrow range of resources. You can use Google.

CLAIM: Bush banned Stem Cell Research; Bush wants to ban Stem Cell Research; Bush restricted Stem Cell Research; Bush is being an ass about Stem Cell Research

Example: http://www.msnbc.msn.com/id/6142664/
Found by: Google: bush ban stem cell research

Counter-example: http://www.usatoday.com/news/opinion/editorials/2004-08-15-stem-cells_x.htm
Found by: Google: bush fund stem cell research

FACT: Embryonic Stem Cell Research was never banned by Bush. In fact, federal funding for it was never banned by Bush either. Bush made it easier to fund Embryonic Stem Cell Research.

First, from our example, we'll point out the major theme: "facing a president who is trying to ban research vital to finding new treatments and cures." We can start with the two quotes below.
On Aug. 9, 2001, in his first major speech to the American people, Bush announced that he had settled on what he called a "compromise" with respect to embryonic stem-cell research. No more embryos could be destroyed in research funded by the federal government. But, drawing what can only be generously described as an arbitrary moral line, Bush said he would allow research using embryonic stem cells already in existence as of the day of his speech.
And...

The president, they insist, compromised. "Use what was around before 2001" is not a ban, they say. And, they self-righteously point out, this president has allocated more funding — nearly $25 million — for embryonic stem-cell research than any other president.

Well, sorry fellas, but prohibiting the expenditure of federal funds on embryonic stem-cell research after August 2001 is a ban. It is a ban of limited scope but a ban it most certainly is.

OK. Let's step down to our counterexample now.
Three years ago, President Bush made the decision to open, for the first time, the laboratory doors to federal funding for human-embryonic-stem-cell research. He determined, however, that federal funds should not be used to encourage or support the destruction of living human embryos, a principle that has been part of federal law since 1996. Funds would be made available for research derived from embryos that had already been destroyed. He placed no limits on private funding of research.
The president's policy is working. Federal funding for embryonic-stem-cell research has grown from zero dollars in 2001 to $24.8 million now, with no cap on future funding. Most of the established U.S. scientists in this field have received funding, and shipments of stem-cell lines are going out to researchers in record numbers. More lines are available in the USA than in any other country.
At the same time, state governments and the private sector are supporting research outside the federal guidelines. One study estimates that 1,000 scientists at more than 30 firms spent $208 million experimenting on embryonic and adult stem cells in 2002.

Much important stem cell work is also being done without wrestling with the ethics of research on embryos. Last year, the National Institutes of Health (NIH) funded $190 million in "adult" stem-cell research on, for example, cells from bone marrow or placental tissue.
First thing to notice: there has been a ban on federal funding of Embryonic Stem Cell Research since 1996. Clinton was in office for the two terms before Bush, 1993-2001. This puts the blame for the original, all-out ban on federal funding of Embryonic Stem Cell Research on the Democrats, the Clinton-Gore Administration.

Second thing to notice: somebody else is funding Embryonic Stem Cell Research. "Bush" is giving them $24.8M, but somehow they spend $208M. The money is coming from somewhere. Even if you want to attack the low level of funding--which is still about one eighth of the private total spent in 2002--you have to realize that because of Clinton, they were getting ZERO from the federal government. You compare $24.8M with $0. Let's say, $24,800,000 vs $0. Which is larger, and by how much?

CLAIM: Embryonic Stem Cells are a Magic Cure; Superman could/would have walked if we had funded Embryonic Stem Cell Research; all disease will bow to the awesome bitchin' powers of our kick-ass Embryonic Stem Cells when we behead King George II

Example: http://irregulartimes.com/paulsstoryshort.html
Found by: Google: stem cell cure bush ban

Counter-example: http://en.wikipedia.org/wiki/Stem_cell
Found by: Being able to use Wikipedia

FACT: We do not know enough about Embryonic Stem Cells to know this; if we did, we wouldn't need to do a little thing called "research." Also, Adult Stem Cells provide many cures and may provide many more, without the need to destroy life or tackle piles and piles of political and ethical bullshit.

From our example, we can show the following:

My formerly athletic son has been crippled in the last eight months by an autoimmune arthritis for which there is currently no cure, only harsh medicines with dangerous side effects that may temporarily slow the inevitable progression of the condition.

Last summer he was a basketball player and laser tag champ, this summer he can barely hobble a few yards at a time with braces on his knees and a cane in his hand. The condition is entering his spine. He is in constant pain.

Cry me a river.

My son's suffering is apparently due to a deficiency in a kind of white blood cell that suppresses autoimmune reactions. It is called a T-regulatory cell. Autoimmune diseased mice have the same problem. Rockefeller University researchers can take a few of the scarce T-regs out of an autoimmune diseased mouse and grow millions and millions of identical T-regs on a bed of ESCs in just a few days. By putting these lab grown T-regs back into the mice, researchers have STOPPED the mice's autoimmune condition dead in its tracks.
This is something we know? OK, this is no longer research.
An AIDS patient for whom drugs no longer work, could be saved with this technique by growing unlimited amounts of his own CD4 cells. Pneumonia patients with a drug resistant bacterial infections could be given the specific kind of WBC that would kill that pneumonia bacteria. Patients sickened or dying of viral diseases like SARS, hepatitis, viral pneumonia, mononucleosis, herpes, chicken pox, even small pox could be cured by growing the right kind of WBC's in mass and giving them back to the patient.
OK, here's my meat for today. In case you're too lazy--and I don't blame you--to read this BS, I'll highlight the major points.

First off, his kid is sick; he has an autoimmune disease, and yadda yadda, he's in pain, he's dying, understand? This is horrible, yes, we know. It sucks to hear that, I'm sorry, my heart goes out to you; try and get some help.

Second, apparently we know we can take Embryonic Stem Cells and make them fix this. This is done, as I understand it, by growing white blood cells using the stem cells and reinjecting them into the body. OK, good, nice. We have a cure, yay, but it involves killing people, so we have a problem, because it's really tragic that we have to make such a harsh moralistic decision.

Third, and finally, he says these techniques can be used to cure AIDS, pneumonia, SARS, smallpox, etc., things we don't like. This is all awesome, I love this, you love this, but we're faced with a moral dilemma which we now have to try and solve. Obviously, since we're sitting on the cure for AIDS today, we have to try and solve this quickly.

To start, let's take some quotes from my counter-example.


There are three types of stem cells: totipotent, pluripotent, and multipotent. A single totipotent stem cell can grow into an entire organism. Pluripotent stem cells cannot grow into a whole organism, but they can become any other type of cell in the body. Multipotent (also called unipotent) stem cells can only become particular types of cells: e.g. blood cells, or bone cells.
Short version: totipotent can become a whole body; pluripotent can become anything; and multipotent can only become one or a handful of things. Let's point out that "multipotent" Stem Cells come from some source X, and can produce any cells of type M[X], where M is a list of lists of cell types (blood, bone marrow, kidneys, brain, skin, etc.) and M[n] is the list of cell types producible from multipotent Stem Cell type 'n'.

Stem cells are also categorized according to their source, as either adult or embryonic. Adult stem cells have been successfully used in treatments for over one hundred diseases and conditions. The use of embryonic stem cells has not yet resulted in any successful treatments, although many researchers believe that they have great potential as the basis of treatments. Research with embryonic stem cells requires destruction of embryos, and is highly controversial because this is considered by some to be a form of murder.
OK, how sad, we have to kill people for Embryonic Stem Cells, and apparently we don't actually have results yet (despite the above claims by our attacker). Now let's find out what an Adult Stem Cell is before I tell you we don't have to kill people to get them--oh shit, sorry, spoiled the ending.
Blood from the placenta and umbilical cord that are left over after birth is a source of adult stem cells. Since 1988 these "cord blood" stem cells have been used to treat Gunther's disease, Hunter syndrome, Hurler syndrome, Acute lymphocytic leukaemia and many more problems occurring mostly in children. It is collected by removing the umbilical cord, cleansing it and withdrawing blood from the umbilical vein. This blood is then immediately analyzed for infectious agents and the tissue-type is determined. Cord blood is stored in liquid nitrogen for later use, when it is thawed and injected through a vein of the patient. This kind of treatment, where the stem cells are collected from another donor, is called allogenic treatment. When the cells are collected from the same patient they will be used on, it is called autologous.
So blood from the icky junk left over after the baby is ejected can be used to harvest Adult Stem Cells? Interesting, no killing. We can even hack up the cord to try and get some. These have also been used to cure a lot of diseases.

Stem cells can be found in adult beings. Adult stem cells reproduce daily to provide certain specialized cells—for example 200 billion red blood cells are created each day in the body. Until recently it was thought that each of these cells could produce just one particular type of cell—this is called differentiation (see Morphogenesis). However in the past few years, evidence has been gathered of stem cells that can transform into several different forms. Bone marrow stem cells are known to be able to transform into liver, nerve, muscle and kidney cells.

Adult stem cells may be even more versatile than this. Researchers at the New York University School of Medicine have extracted stem cells from the bone-marrow of mice which they say are pluripotent.

Point: We can get Adult Stem Cells from your body, or at least from a person's body; and some apparently can perform all of the functions we can perform with Embryonic Stem Cells except for cloning; we can't clone you with Adult Stem Cells, just with Embryonic Stem Cells. We can allegedly use Adult Stem Cells to regenerate any tissue in your body, however.

In fact, useful sources of adult stem cells are being found in organs all over the body. Researchers at McGill University in Montreal have extracted stem cells from skin that are able to differentiate into many types of tissue, including neurons, smooth muscle cells and fat-cells. These were found in "dermis", the inner layer of the skin. These stem cells play a pivotal role in healing small cuts.

In the same way that organs can be transplanted from cadavers, researchers at the Salk Institute in California have found that these could be used as a source of stem cells as well. Taking stem cells from the brains of corpses they were able to coax them into dividing into valuable neurons. However, whether they will function correctly when used in treatment has not yet been determined.

We can get Adult Stem Cells from all over your body; this widens the number of X indices for the groups M[X] we can produce. If we map out each X--each type of Adult Stem Cell--to the set M[X] describing what it can transform into, we get a large catalog of available cures; given your condition, we could compute the set of X values that let us use your own tissue to heal you, even when some of your own sources of Adult Stem Cells are damaged.
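
For the programmer types, here's the M[X] idea as a toy C table. The entries echo the quotes above (bone marrow, dermis, cord blood), but real biology obviously doesn't reduce to a static lookup table, so treat this as illustration only.

/* Toy rendering of the M[X] map: adult stem cell source -> producible
 * tissues.  Entries echo the quotes above; illustration only. */
#include <stdio.h>
#include <string.h>

struct source_map {
    const char *source;       /* X: where we harvest the cells          */
    const char *produces[5];  /* M[X]: what they can become             */
};

static const struct source_map M[] = {
    { "bone marrow", { "liver", "nerve", "muscle", "kidney", NULL } },
    { "dermis",      { "neuron", "smooth muscle", "fat", NULL } },
    { "cord blood",  { "red blood cell", "white blood cell", NULL } },
};

int main(void)
{
    const char *x = "bone marrow";   /* pick a source */
    for (size_t i = 0; i < sizeof M / sizeof M[0]; i++)
        if (strcmp(M[i].source, x) == 0)
            for (size_t j = 0; M[i].produces[j]; j++)
                printf("%s -> %s\n", x, M[i].produces[j]);
    return 0;
}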

Adult stem cells have been successfully used to treat thousands of patients and over one hundred diseases and conditions, while all attempts to use embryonic stem cells have failed, most commonly resulting in tumours. This fact has been used to argue that limited public health funds should focus on extending adult stem cell research success, until privately funded research on animal embryonic stem cells shows some results.
Translation for the weak-minded: Embryonic Stem Cells fail to work in every case tested on humans, whereas over one hundred diseases have been treated in thousands of patients using Adult Stem Cells. To put it more simply, Embryonic Stem Cells are pure and total bullshit.

For over 30 years, bone marrow stem cells have been used to treat cancer patients with conditions such as leukemia and lymphoma. During chemotherapy, most growing cells are killed by the cytotoxic agents. These agents not only kill the leukemia or neoplastic cells, but also the stem cells needed to replace the killed cells as a patient recovers. However, if the stem cells are removed before chemotherapy, and then reinjected after treatment is terminated, the stem cells in the bone marrow produce large amounts of red and white blood cells, to keep the body healthy and to help fight infections.

Since the 1980s stem cells have been taken from the blood instead of the bone-marrow, making the procedure safer for older people. Although normally scarce, the number of peripheral blood cells can be increased by a course of drugs, which release the stem cells from the bone-marrow. These are removed before chemotherapy, which kills most of them, and are re-injected afterwards.

Adult stem cells have been successfully used to treat paralysis due to spinal cord injuries, Parkinson's disease and other illnesses.

Some cute things we've done with Adult Stem Cells. Notice the last line here, about spinal cord injuries and paralysis. Could we have made Superman walk before he died? Perhaps Reeve wasn't kept in the wheelchair by Bush?

Working with critically ill heart patients, researchers in Vienna have successfully used Mesenchymal stem cells to regenerate healthy new heart tissue. The stem cells were harvested from the patient's own bone marrow and injected into the ventricle. The heart is stopped for approximately two minutes to allow the stem cells to attach to the existing heart tissue. The patient is only under local anesthesia so that the surgeons can monitor how the lack of cerebral oxygen is affecting the patient. The heart is then restarted and incisions closed. The procedure is minimally invasive, as far as heart surgeries are concerned.

All of the patients that received the new treatment experienced repaired scar tissue and most had nearly complete return of proper heart function. As stated previously in the article, autologous stem cell implants such as these could alleviate legal and moral issues revolving around stem cell therapies. Type 1 Juvenile Diabetes could be cured with stem cells in the future.

Truth be told, this is research on Adult Stem Cells, and it could cure cancer, heart disease, and Type 1 Diabetes.


So what should we do? I say we shitcan Embryonic Stem Cell research and go with Adult Stem Cell research. Why? Have you not read my post? Fine, I'll recap:

  1. Embryonic Stem Cell research has too many political and ethical issues surrounding it. Any attempt to actually work towards this stuff, despite its potential, will be clogged as the left attacks the right's attempts to aid it and then uses the stall against them, while the right refuses to really pull out all the stops.
  2. Embryonic Stem Cell research has never helped anything. All uses of Embryonic Stem Cells in patients have failed, and many have resulted in tumors. Embryonic Stem Cells have apparently not even worked in animal research.
  3. Adult Stem Cells have been used to treat over one hundred diseases in thousands of patients, and are continuing to show results, not empty and mystical promise, in current research. Adult Stem Cell research has no annoying moral issues surrounding it, and therefore can't be used to support more political bullshit; it is immune to the clogging that results from trying to pass large amounts of fecal matter through the narrow piping that today's politicians have created.
Does this satisfy your thirst for blood? Go Adult Stem Cell research!


Thursday, May 20, 2004

Mixed Memory Allocator

A hybrid approach to glibc malloc()


Prefix



As we all know, modern computer systems use hundreds of megabytes of system memory to support their operation. At a shell, you can almost always stay under a hundred if you don't run any servers; but in a GUI, you may reach 200, 300, or 500MiB easily. Some applications use 20MiB or 40MiB; some start at 30, grow to 80 or 100MiB, and stay there.

Most programs use a class of glibc function calls known as the malloc() family. These include malloc(), calloc(), realloc(), and free(). C++ programs use new and delete (and their array forms), which are usually rooted in these functions, but not necessarily.

Because of the way the allocator works, it may be theoretically possible to increase its efficiency by anywhere from a minor, barely noticeable amount to entire orders of magnitude. This would be done by balancing the two methods used for allocation into a single, more effective method which neither inherently wastes excess memory nor is forced to hold massive amounts of unused memory from the system.

Please note that the scope of this document is Linux only, and systems not supporting mmap() or equivalent calls will not be able to benefit from the new allocator scheme detailed within.

The Current Method



The malloc() class of functions has two current modes of operation, both flawed in different ways. The major method, used by most allocators on most platforms, uses the heap; the other method, available on Linux and other Unixes, uses the mmap() class of system calls.

Most calls to glibc malloc() functions for allocation use the heap. The simplest explanation of this method is that the heap shrinks and grows as needed. If there is no free segment larger than or equal to the requested allocation, the heap is expanded via brk() to create space for the new segment. If there is free space at the end of the heap, brk() is used to shrink it, freeing the memory back to the system.
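
A minimal observation sketch (Linux/glibc assumed) of watching the break move; exact numbers depend on malloc tunables like M_MMAP_THRESHOLD and M_TRIM_THRESHOLD, so this is a toy, not a spec:

/* Watch the program break move as the heap grows. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    void *before = sbrk(0);          /* current end of the heap        */
    void *p = malloc(64 * 1024);     /* small enough to use brk(),     */
                                     /* not the mmap() path            */
    void *after = sbrk(0);

    printf("break grew by %ld bytes\n",
           (long)((char *)after - (char *)before));

    free(p);                         /* freeing the top chunk lets     */
                                     /* glibc trim the heap, maybe     */
    printf("break after free: %+ld bytes vs start\n",
           (long)((char *)sbrk(0) - (char *)before));
    return 0;
}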

The problem with the heap is that it may only exist as one large chunk. Its base (beginning) never changes, and its endpoint must always be no less than the last used piece of memory. This means that any holes in the heap remain allocated and unavailable to the system. They take up memory, and those unused pages get swapped out while they're not in use; so rather than just freeing and reallocating RAM, a meaningless disk access must be done to give the RAM back to the system and to reallocate it from the system.

Another problem with the heap is that long-running applications can build up a lot of intermittent fragmentation, and sometimes even spike in RAM usage and leave junk at the top. Thus, a view of the heap may look like this for most of the application's running time:

[****---------------*------------*--*]

For extremely large allocations, by default those >128KiB, malloc() becomes a wrapper for mmap(). mmap() maps a set of physical pages to a set of virtual pages, but it always maps whole pages. The problem with mmap() is much clearer: if you map a byte, you use 4KiB. Ten 1-byte allocations may look like this:

[*---][*---][*---][*---][*---][*---][*---][*---][*---][*---]

The advantage of mmap() is that any one of these can be freed as soon as it is no longer in use. Unfortunately, it still bloats the ramspace.
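
To see the page granularity for yourself, here's a tiny sketch; 4KiB is the usual x86 page size, but sysconf() reports whatever your system really uses:

/* mmap() works in whole pages, so a 1-byte request costs a full page. */
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);

    /* Ask for 1 byte; the kernel rounds the mapping up to one page. */
    char *p = mmap(NULL, 1, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return 1;

    printf("1-byte request cost a %ld-byte page at %p\n", page, (void *)p);
    munmap(p, 1);   /* and the whole page goes back to the system at once */
    return 0;
}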

A Hybrid Solution



On systems supporting mmap(), a hybrid solution would most likely increase the efficiency of memory allocation. Instead of using the heap or raw mmap(), we could use mmap()ed segments that act as heaps; that is, create a bunch of "mini-heaps" by mmap()ing contiguous memory as needed. This still leaves us vulnerable to fragmentation within pages, but entirely empty pages can be immediately freed back to the system. Also, complex mappings can alleviate situations where a new page cannot be mapped "between" two old pages.

Instead of using the heap, malloc() could use mmap() to map private, read/write, anonymous pages at a chosen address. One correction to the obvious approach: MAP_FIXED does not fail when the address is unavailable; it silently replaces whatever mapping is already there, which would be disastrous here. The right idiom is to pass the desired address as a hint without MAP_FIXED and check whether the kernel returned that exact address; if it didn't, malloc() can react by mapping somewhere else.
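
A sketch of that idiom, under the assumption that the caller keeps its own records of what it asked for; map_page_at() is a made-up helper name:

/* Hint-without-MAP_FIXED: try an address, verify what we got. */
#include <stddef.h>
#include <sys/mman.h>

/* Try to extend a mini-heap at `want`.  Returns the mapped address
 * (which may differ from the hint) or NULL on failure. */
static void *map_page_at(void *want, size_t len)
{
    void *got = mmap(want, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (got == MAP_FAILED)
        return NULL;
    if (got != want) {
        /* The kernel placed it elsewhere.  The caller decides: keep
         * this region as a fresh mini-heap, or munmap() and retry. */
    }
    return got;
}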

The basic function would be augmented by mapping the same physical pages at additional virtual addresses. (Plain anonymous private pages can't be mapped twice, so this would need shared mappings backed by something like shm; see the sketch further down.) Consider the case:

[**--] ____ [-***]

Where we need 7 segments to scale on here. We could satisfy this easily:

[**--][----][-***]

Map another page between the original two. If this mapping fails, however, we need to do one of two things: map 2 new pages somewhere; or, my preferred method, map the other 2 pages somewhere else where we CAN map a third in between:

[**--] ____ [-***]
[**--][----][-***]

This wouldn't map extra system memory, and so we'd save a page (3 available units, plus the 1 left in the page we didn't alloc, makes 4 units, one page in our example); however, you would have to track your mapped pages and note which are which, because the multiple mapping makes allocations to one area of virtual memory space affect another. Because the first and third pages above and below are the same physical memory, we must treat them as a single area of RAM in our allocator.
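
For the curious, here's a minimal demonstration of one physical page appearing at two virtual addresses. Plain anonymous private pages can't do this, so the sketch uses a POSIX shared memory object; an allocator doing this for real would need to manage such objects carefully. Link with -lrt on older glibc.

/* One physical page, two virtual addresses, via POSIX shm. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/mman.h>

int main(void)
{
    int fd = shm_open("/mm_demo", O_RDWR | O_CREAT | O_EXCL, 0600);
    if (fd < 0)
        return 1;
    shm_unlink("/mm_demo");   /* name gone; pages live while fd is open */
    ftruncate(fd, 4096);

    char *a = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    char *b = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (a == MAP_FAILED || b == MAP_FAILED)
        return 1;

    strcpy(a, "written through mapping a");
    printf("read through mapping b: %s\n", b);   /* same physical page */
    return 0;
}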

Handling and tracking these types of allocations would be complex, but the trade-off would be good. Consider:

[****][----][----][----][---*][----][----][----][*--*]

This is our original heap from the first example. This time, however, we can free some ram back to the system.

[****] ____ ____ ____ [---*] ____ ____ ____ [*--*]

Now, instead of using 9 pages, we use 3.

This hybrid allocator should fall back on the heap allocator if the system returns ENOMEM. ENOMEM indicates that either the system is out of memory, or that the process may not make any more memory mappings due to resource restrictions. Although running out of memory isn't recoverable, running out of your allotted memory mappings can and should be recovered from by using the heap as a fallback.
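
A rough sketch of the fallback logic; alloc_pages() is a made-up name, and a real implementation would also need to remember which allocator owns each region so free() can do the right thing later:

/* Try an anonymous mapping; on ENOMEM, fall back to growing the heap. */
#include <errno.h>
#include <stddef.h>
#include <unistd.h>
#include <sys/mman.h>

static void *alloc_pages(size_t len)
{
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p != MAP_FAILED)
        return p;

    if (errno == ENOMEM) {
        /* Out of mappings (or memory): try the heap instead. */
        void *h = sbrk(len);
        if (h != (void *)-1)
            return h;
    }
    return NULL;
}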

It would be very interesting to see a hybrid allocator included in glibc. It would take much coding, but the benefits could be very substantial if handled properly.

Saturday, May 15, 2004

Cascading encryption?

Let's do something nasty. . .



Thinking about IPSec, I came up with a funny thought. What if you had a database of known keys on each host, of any size, and could use them at random? That'd be a hell of a thing to crack, right? Well here's the idea in a nutshell.

First, we'll do a quick header. It'll look like the following:

[PSN][ESN][SourceAddr][DestAddr][Len]{[KEYSIG]}{{[Port][OtherHeaderData][Data]}}

First let's note how this works in theory.

PSN is the PhysicalSequenceNumber, the number of the packet.

ESN is the EncryptionSequenceNumber. The packets are encrypted with a cascading scheme which depends on having ALL of the packets and decrypting from ESN 0 forward.

SourceAddr and DestAddr are for routing. They tell where it came from, who it goes to. These are also critical.

Len tells the length in some way. For our purposes, it doesn't matter whether it means the length of the following data, the length of the whole packet, or some odd thing, as long as it gives us the last thing we need for routing.

Now, here's the fun part. KEYSIG identifies the key to be used, which has to be indexed in an identical database on each side. BUT! KEYSIG is encrypted by the key of [ESN]-1 AND THEN by the key of [ESN]-2 (not encrypted for ESN 0; encrypted with only ESN 0's key for ESN 1). So, if you miss the KEYSIG for any sequence, you're screwed. Period.

The rest of the data is encrypted first by the key identified by KEYSIG, then by the key for [ESN]-1 (just KEYSIG's key for ESN 0). So, even if you know ALL of the keys, you have roughly a 1-in-N^2 chance of guessing the pair of keys that protects KEYSIG for any given packet, and the chaining pushes the work of recovering the whole stream toward N^N. Missing the key for a KEYSIG? Your man-in-the-middle attack ends.
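
Here's the header as a rough C struct. None of the field widths are nailed down anywhere above, so every size here is an assumption:

/* The header as a rough C struct; all field widths are assumptions. */
#include <stdint.h>

struct casc_packet {
    uint32_t psn;          /* PhysicalSequenceNumber: packet number     */
    uint32_t esn;          /* EncryptionSequenceNumber: decrypt order   */
    uint32_t src_addr;     /* routing: where it came from               */
    uint32_t dst_addr;     /* routing: where it's going                 */
    uint16_t len;          /* payload length                            */
    uint8_t  keysig[16];   /* key index; encrypted under the keys of    */
                           /* ESN-1 and then ESN-2, as described above  */
    uint8_t  payload[];    /* port, other header data, data; encrypted  */
                           /* under KEYSIG's key, then ESN-1's key      */
};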

If your host doesn't have the key for KEYSIG, you can send back a packet which requests it using the signatures from the ESN immediately preceding it. A question arises here: with a perfect man-in-the-middle, can we ensure that this is possible without letting a M-I-T-M fake it?

The idea here is that each host would have hundreds of keys acquired at different times. The cascading application of keys from the past 2 packets, one atop the other, plus the encryption of KEYSIG itself, makes it impossible to miss a packet and still decrypt the rest of the stream, even if you possess ALL of the keys.
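
And a sketch of what the receiver's decrypt loop would look like, using the struct above. The cipher and key database are toy stand-ins (XOR is obviously NOT a real cipher); the point is only the order in which keys get peeled off:

/* Receive-side cascade; packets must be processed in ESN order. */
#include <stddef.h>
#include <stdint.h>

static uint8_t keydb[4][16];            /* toy key database             */

static const uint8_t *keydb_lookup(const uint8_t *keysig)
{
    return keydb[keysig[0] % 4];        /* toy: index by first byte     */
}

static void decrypt_buf(uint8_t *buf, size_t len, const uint8_t *key)
{
    for (size_t i = 0; i < len; i++)
        buf[i] ^= key[i % 16];          /* XOR placeholder, NOT secure  */
}

static void decrypt_stream(struct casc_packet **pkts, size_t n)
{
    const uint8_t *prev1 = NULL;        /* key used by packet ESN-1     */
    const uint8_t *prev2 = NULL;        /* key used by packet ESN-2     */

    for (size_t esn = 0; esn < n; esn++) {
        struct casc_packet *p = pkts[esn];

        /* Peel KEYSIG: plaintext at ESN 0, one layer at ESN 1, two
         * layers (ESN-2's key outermost) from ESN 2 onward. */
        if (esn >= 2) decrypt_buf(p->keysig, sizeof p->keysig, prev2);
        if (esn >= 1) decrypt_buf(p->keysig, sizeof p->keysig, prev1);

        const uint8_t *key = keydb_lookup(p->keysig);

        /* Payload: ESN-1's key is the outer layer, KEYSIG's the inner. */
        if (esn >= 1) decrypt_buf(p->payload, p->len, prev1);
        decrypt_buf(p->payload, p->len, key);

        prev2 = prev1;
        prev1 = key;
    }
}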

The KEYSIG for each packet should be completely random.

Sunday, May 02, 2004

I gave my aunt Gentoo

Converting the Non-Savvy



About a week ago, my aunt had some problems with her computer. Windows 2000, 4GiB hard drive (amazingly, 4.00GiB, not 4.00GB), lots of RAM. It was loaded with spyware, including a popup blocker/antivirus malware program (a well-designed trojan). On top of that, the hard drive was 80% full and 79% of its files were fragmented, some with hundreds of thousands of fragments. So I did what anyone stuck with Windows would do: ran Ad-aware to dump all the spyware, and defragmented the hard drive. That. . . kind of fixed it.

Anyway, she asked me if all that was stuff I did at my house, and I took the opportunity to point out that I don't use Windows. So we got into a little conversation about Linux, and she started asking what it was. I tried to explain it, and told her that it'd be easier to show her than to try and explain it, and easiest if she had a spare machine.

. . . ever meet someone who knows nothing about computers but just happens to have 1GHz Pentium IIIs lying around in odd places? They're out there; it's scary.

So I borrowed it for a week. It had 256M of ram and a 20GB hard disk. I put 8GB on /, 2GB on swap, and 10GB on /data. / and /data are reiserfs. I set it up with gentoo-dev-sources, supermount and all; and ~x86 Gentoo. It took a week, yes. A LONG ASS TIME.

I equipped that computer with AbiWord, Gnumeric, Gnome2, KDE 3.2.2, XFCE4, Mozilla Firebird, gDesklets, X-Chat 2, abcde, vorbis tools, Anjuta, XMMS, GnomeMeeting, Gaim, Gimp 2.0.1, and xorg-x11. I gave it back to her and showed her the ropes.

First, I showed her how to use LeftCtrl+LAlt+F* to switch terminals. I then explained that GDM starts three (3) displays, and that it invariably leaves itself on the last. These three are on L[C-A]-F[7-9], and so the display automatically starts on TTY9. She grasped these concepts with relative ease; there were no questions, and she absorbed it quite easily.

While she was off doing other things, I quickly set up Gnome2 with the Crux theme, 8x2 virtual desktops, and the alien-night wallpaper from KDE; and turned off spatial browsing in Nautilus. Then I put the GoodWeather and LTVariations CPU, Memory, and Disk monitor desklets on her desktop, and set up GoodWeather for her area. When she came back, I explained that the GoodWeather desklet would fetch current weather conditions from the net every 10 minutes. She thought that was cool. :) I then showed her the virtual desktops, one of which was running Mozilla Firebird. She thought that was cool as well. She says 'cool' a lot. :/

Next, I walked her through Webmin's user-adding process. She grasped the process with ease. It gave me an opportunity to explain the DAC system as well; it set the wrong perms (755) on the new user's home directory. So I used chmod -R to change those, and then explained why I set them the way I did. I then showed her her own home directory permissions in Nautilus' properties window, and explained what each meant. Then I showed her how the DAC would block access to her folder from the new user's login, and vice versa. After she had grasped that (it only took one run through the explanation), I explained the further implications of the DAC: since the entire core system is only writable by root, viruses and trojans have no way to spread; and stupid users can't screw with your system. She liked that. :)

I didn't show her how to shut it down, but she said not to worry about it because she never turns 'em off anyway. For now, she seems to grasp the DAC, the switching between displays, and virtual desktops pretty easily. Once she's got the net up, I'll show her how to turn on DHCP for her adaptor, update her Portage tree, and upgrade everything. In the meantime, she said she'd just mess with it (hey, there are menus there) and see what it can do.

Tuesday, April 06, 2004

Confusion


Why are there standards?



I'm using the draft of the RBAC standard until I can afford the $18 for the official PDF. I'm confused by a lot of unclear things and plain backwards contradictions in the draft, and thus I won't be putting out anything "stable" until I get the final standard.

Why do standards cost money anyway? Is this why many people don't follow them? Look at C compilers: most do something weird, even though they get very close to C/C++ compliance. Maybe nobody wants to pay for factual information :P At any rate, it's copyrighted and up for download at a price, so I can't go grabbing it for free; that would be unethical. I use all free software. Still, I question the motives.

I might also note that my software won't interact with other software made to conform to the standard, so I don't have to follow it. However, it makes a good guideline, and I'd like to be true to form. I guess as long as it's well documented, I can prop it up on my own input; but that would feel somehow tainted. Oh well ;P We'll see what happens.

Sunday, April 04, 2004

RBAC and U . . . RPID


Fun with Security



I'm bored again. Putting my filesystem aside, ANLFS aside, my P2P design aside, everything aside, I'm working on something bigger.

http://usrbac.sf.net is the homepage of USRBAC, an implementation of the RBAC standard developed by NIST (particularly, Rick Kuhn). USRBAC is a three-part project. The first two parts, KURBAC and rbacd, work together to allow the kernel to use a userspace daemon as an access control system, with the goal being full RBAC compliance in rbacd. The third part is what I'm most interested in, and starts only after rbacd is finished.

The third part is called URPID, or User-Role Profiling for Intruder Detection. It gathers data about a user during a session, profiles the user with it, and then uses this profile as a sort of fingerprint. The goal is to analyze many aspects of the user's behavior and raise a security fault if the user appears not to be the owner of the account.

The very first URPID module is going to be urpid_hitmiss. A "Hit" here is defined as attempting a permissible operation, i.e. performing an operation on an object when you are allowed to perform that operation on that object. A "Miss" here is defined as attempting a nonpermissible operation, i.e. performing an operation on an object when this permission is not granted to you by any active role. The Hit/Miss analysis takes three pieces of data into account:

- A Base Tolerance: How many times can the user "miss" before we question his identity?
- A Hard Tolerance: What percent of the user's activity can be "misses" when he's suspect of being an account hijacker?
- Permission Checking: How many times has the user hit, versus how many times has he missed?

The basic concept is that until the session has [Base Tolerance] misses, hits and misses are just counted without any concern. Once [Base Tolerance] is reached, each miss triggers a computation of the percentage of misses out of all tallied operations. If during any of these computations the percentage rises above [Hard Tolerance], a security fault is raised.
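
In C, the module's core check could look something like this; the tolerance numbers are made up, and a real module would pull them from policy:

/* Core of urpid_hitmiss as sketched above. */
#include <stdbool.h>

struct hitmiss {
    unsigned hits;
    unsigned misses;
    unsigned base_tolerance;   /* e.g. 20: misses we ignore outright   */
    double   hard_tolerance;   /* e.g. 0.25: >25% misses => fault      */
};

/* Record one operation; returns true when a security fault should be
 * raised against the session. */
static bool hitmiss_record(struct hitmiss *s, bool hit)
{
    if (hit) {
        s->hits++;
        return false;
    }
    s->misses++;
    if (s->misses < s->base_tolerance)
        return false;          /* still under the base tolerance       */

    double ratio = (double)s->misses / (s->hits + s->misses);
    return ratio > s->hard_tolerance;
}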

When a security fault is raised under URPID, the user's session is locked (all processes are frozen/paused). The user's console is watched for the CTRL+C control to be passed, and if it is passed, the user's session is killed. We call willful killing of one's own session during a security fault "Submission."

To resolve a security fault without submission, a given number N of users from a given role R must assert that the user in session S is in fact the given user. The possible outcomes:

- If N users in role R assert that the session is not hijacked, and nobody casts doubt, the session is marked as known to be possessed by the user, or "Clean," and no further security faults are raised for the session (except by modules that may raise faults even in known-clean sessions, such as those that would halt dangerous/destructive operations and call for authorization).
- If any users in role R cast doubt (vote no), the session remains frozen until those users remove their doubt.
- If N votes from role R say the session is not clean, and fewer than N say it is clean, the session is terminated.
- If both sides reach N votes from role R, the session remains in deadlock until the user submits.
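
The resolution rule boils down to a small decision function; the enum names here are mine, not part of any spec:

/* Vote resolution for a raised security fault. */
enum fault_verdict { FAULT_PENDING, FAULT_CLEAN,
                     FAULT_TERMINATED, FAULT_DEADLOCK };

static enum fault_verdict resolve_fault(unsigned yes, unsigned no,
                                        unsigned n)
{
    if (yes >= n && no >= n)
        return FAULT_DEADLOCK;     /* both sides hit N: wait for submission */
    if (yes >= n && no == 0)
        return FAULT_CLEAN;        /* N vouched, nobody cast doubt          */
    if (no >= n && yes < n)
        return FAULT_TERMINATED;   /* N say hijacked, fewer than N disagree */
    return FAULT_PENDING;          /* stay frozen, keep collecting votes    */
}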

When a session is terminated, by submission or by force, all gathered profiling data is discarded; no use profiling the user based on a hijacker's actions.

Hit/Miss tracking is theoretically effective if and only if the hijacker has little data about the user. In cases where the account is fully hijacked (all roles known, all passwords known, little to no security misses raised), this would be ineffective. Other modules must be designed to profile the user's activities and attempt to discover hijackers.

The URPID project is far off; I have to get KURBAC and rbacd working first. It will be a fun and interesting project for me.

Saturday, January 31, 2004

OOFS


Filesystems Revisited



I still haven't reworked my disk-based FS into an object-oriented FS. The Object Relational Location Filesystem (ORLFS) will work as follows, though:

Header Area
|--Superblock
|----Address of Object 0
|------Block containing Object 0
|------Byte Offset in block
|----Object for Beginning of Inode List
|--Journal List
|----Objects that function as Concatenated Journals, in order
Data Area
|--Objects

The idea is to allow objects to freely move around. Objects are referenced by their object ID, which is used to locate the block, and the byte offset within that block, at which the object resides. The object itself has its object ID embedded in its header, as well as the object IDs of the previous and next physical objects. The access method required to use this costs disk reads and seeks, but it provides a large amount of flexibility. Filesystems can easily be fscked, resized, defragmented, cyphered, encrypted, and compressed.

The entire disk is used by objects. Free space is free-marked objects. Directories are directory-marked objects. Files are file-marked objects. Objects are split and redone as necessary; usually this would be free space objects losing pieces and/or becoming multiple objects. Objects do not have to be in order on the disk.
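
A rough sketch of what the on-disk structures might look like; all field widths and the type flags are assumptions, not a fixed format:

/* Rough on-disk structures for ORLFS; widths are assumptions. */
#include <stdint.h>

struct orlfs_object {       /* header at the start of every object     */
    uint64_t object_id;     /* this object's ID (also in the index)    */
    uint64_t prev_id;       /* previous *physical* object on disk      */
    uint64_t next_id;       /* next *physical* object on disk          */
    uint32_t type;          /* free / directory / file / journal mark  */
    uint64_t length;        /* bytes of payload that follow            */
    /* payload follows */
};

struct orlfs_index_entry {  /* one slot in the Object List             */
    uint64_t block;         /* block containing the object             */
    uint32_t offset;        /* byte offset within that block           */
};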

Here is an example:

{0000: Object list}{001A: Free space.................................}{0CF4: / inode directory entry}{0001: /boot/bzImage-2.6.1..................................}{002A: journal chunk............................}{19F7: Free space.............................................................................}


Now we can add a small file:

{0000: Object list}{001A: Free space}{0024: /a.out}{0025: free space}{0CF4: / directory}{0001: /boot/bzImage-2.6.1..................................}{002A: journal chunk............................}{19F7: Free space.............................................................................}

Next, we can defrag:

{0000: Object list}{0024: /a.out}{0CF4: / directory}{0001: /boot/bzImage-2.6.1..................................}{002A: journal chunk............................}{19F7: Free space.......................................................................................................................................}

A few bytes in the object link change in each of these operations, as well as forward/backward pointers to the next/previous objects. Of course, to move a journal chunk, you need to start a second journal chunk, flush everything from the first, and then free the first.

This will make on-the-fly shrinkage/growth and defragmentation easy, and will leave objects fairly manageable. Going to any object is a matter of jumping to the Object List entry for that object (1 seek/read for a good driver) and reading that object off the disk (1 more seek/read for a good driver). This means 2 seek/read pairs to read any object given its object ID, and 2d pairs to read a directory entry at depth d, assuming none of the directory objects along the way are fragmented. So, /usr/src/linux/Makefile (first fragment only) would be read off disk after 10 seek/read pairs (/, /usr, /usr/src, /usr/src/linux, /usr/src/linux/Makefile; 2 for each), 190-210 mS for a 5400 RPM drive (19-21 mS seek time).
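
Here's the two-seek read as a sketch, using the structs above against a toy in-memory "disk"; a real driver would obviously talk to the block layer instead, and each read_block() below is where one of the seek/read pairs lands:

/* The two-seek object read described above. */
#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE   4096
#define INDEX_BLOCK0 1     /* assumed start of the Object List         */
#define ENTRIES_PER_BLOCK (BLOCK_SIZE / sizeof(struct orlfs_index_entry))

static uint8_t disk[1 << 20];                   /* toy "disk" image    */

static void read_block(uint64_t block, void *buf)   /* 1 seek + 1 read */
{
    memcpy(buf, disk + block * BLOCK_SIZE, BLOCK_SIZE);
}

static void read_object(uint64_t object_id, struct orlfs_object *hdr)
{
    uint8_t buf[BLOCK_SIZE];

    /* Seek/read #1: the index entry for this object ID. */
    read_block(INDEX_BLOCK0 + object_id / ENTRIES_PER_BLOCK, buf);
    struct orlfs_index_entry *e =
        (struct orlfs_index_entry *)buf + object_id % ENTRIES_PER_BLOCK;
    uint64_t block = e->block, offset = e->offset;

    /* Seek/read #2: the block holding the object itself. */
    read_block(block, buf);
    memcpy(hdr, buf + offset, sizeof *hdr);
}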

Sound plausible?
