
Three days of silliness

Friday, May 11th, 2007

Well, I’ve started implementing both variants of interrupt handling: the first in real mode, the second in supervisor mode.

I’m such an idiot :) I tried to use a call gate to change privilege level from ring 0 to ring 3. I dug through memory dumps trying to understand why the code caused a #GP. I fixed the stack-switching issues (that was the first reason for the fault), made the call gate installation correct… but the problem stayed with me every day and a good chunk of the night. The reason is simple: an lcall through a call gate cannot decrease the privilege level; a call gate can only transfer control to the same or a more privileged ring, and the usual way to drop from ring 0 to ring 3 is an iret with a stack frame holding the user-mode SS:ESP and CS:EIP. Next time I’ll read the Intel specs more times per day, even if it eats into the time I spend reading Ray Bradbury’s stories. Anyway, this stupidity of mine helped me get comfortable with memory dumps in the virtual machine and improved my knowledge of AT&T syntax.

The other way of handling, which I’m also implementing (call it the “main branch” of the project), is handling in real mode by catching interrupts reflected to vm86 mode by the vm86 monitor. It’s adapted from the Intel PXE SDK with slight differences.

The main thing to think about is the interface for packet receiving.

The real mode code currently assumes that network operations are performed in a loop, from the start to the end of the exchange, without being interrupted for other needs. Something like this pseudocode:

int pxe_poll() {
        /* no NIC interrupt since the last poll */
        if (__pxe_isr_occured == 0)
                return 0;

        /* try to pull a received packet from the queue */
        if (pxe_packet_recv())
                return 1;

        return 0;
}

while (1) {
        if (pxe_poll())
                do_receive();

        if (received_all)
                break;

        check_resend_needed();
}

What problems are there? If we use a TCP connection and received_all triggers, it doesn’t mean the sender has received our ACK for the last needed packet, or that he has received the FIN. After received_all becomes true we begin the next stage of working with the file, and meanwhile the sender may keep sending packets (he lost our ACK and FIN for some reason); for some time these packets will spam the NIC’s receive queue and cause a queue overflow (the same happens if we don’t check whether a packet arrived within a reasonable time interval). Maybe it’s not such a big deal for HTTP, but it’s still not a very clean and shiny solution.
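One possible mitigation, sketched very roughly below: after received_all, keep polling for a short grace period and throw away anything that still arrives for the finished connection. pxe_time() and pxe_packet_drop() are hypothetical helpers here, not existing code.

extern unsigned long pxe_time(void);    /* hypothetical: seconds since boot */
extern int  pxe_poll(void);
extern void pxe_packet_drop(void);

/* drain late retransmissions so they don't pile up in the NIC's
 * receive queue after we have stopped caring about the connection */
void
pxe_drain_stale(int grace_seconds)
{
        unsigned long deadline = pxe_time() + grace_seconds;

        while (pxe_time() < deadline) {
                if (pxe_poll())
                        pxe_packet_drop();      /* connection already finished */
        }
}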

After understanding the problem with privilege levels and call gates, I’ve thought about a somewhat different approach.

The ISR is executed in ring 0 (CPL 0) and performs all receive/send/resend operations, using user data selectors and code in userspace. If it works, of course; after the call gate issue I’m rather cautious in my expectations.

So this code will handle the interrupt and grab all packets that match the installed packet filters. Packet filters are installed via pxe_socket() calls. If a packet fits a filter, it is stored in the packet queue (currently handled in pxe_core); if not, it is dropped. This means TCP-related code must also run in ring 0 (thank the gods, UDP doesn’t need this). User code will query ring 0 via calls, or maybe via direct access to structures in userspace; the first case is simpler to synchronize when adding/removing packets from the pxe_core packet queue.
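A minimal sketch of what the filter and queue might look like; the names and layouts here (pxe_filter, pxe_core_classify() and so on) are my illustration of the idea, not the actual pxe_core code:

#include <stdint.h>
#include <stddef.h>

#define PXE_QUEUE_SIZE  16                      /* packets kept by pxe_core */

/* a filter installed by pxe_socket(): which packets user code wants */
typedef struct pxe_filter {
        uint32_t src_ip;                        /* 0 means "any" */
        uint16_t src_port;
        uint16_t dst_port;
        uint8_t  proto;                         /* UDP or TCP */
} pxe_filter_t;

/* ring of accepted packets, filled from the ring 0 ISR */
typedef struct pxe_queue {
        void    *packets[PXE_QUEUE_SIZE];
        uint16_t head, tail;
} pxe_queue_t;

extern pxe_filter_t *pxe_filter_match(const void *pkt);        /* assumed helpers */
extern void          pxe_packet_drop(void *pkt);

/* called from the ISR for every received frame */
static void
pxe_core_classify(pxe_queue_t *q, void *pkt)
{
        if (pxe_filter_match(pkt) == NULL) {
                pxe_packet_drop(pkt);           /* nobody asked for it */
                return;
        }
        q->packets[q->tail] = pkt;              /* keep it for user code */
        q->tail = (uint16_t)((q->tail + 1) % PXE_QUEUE_SIZE);
}

If user code reads such a queue directly, the head/tail updates have to be synchronized against the ISR; if it goes through calls into ring 0, the ring 0 side can do that itself, which is why the first case looks simpler to me.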

Well, this post is big enough, time to finish it and get back to implementing more of ICMP (the main goal to achieve before the beginning of Summer of Code).

Next thing to understand

Sunday, April 29th, 2007

Now the path to the result is becoming clearer, but harder :)

The main thing to do now is to install an interrupt handler for the NIC. The vm86 monitor implemented by BTX reflects interrupts to the vm86 task. So the user client application (the BTX client) must find out which interrupt number is needed (the PXE API provides such information) and install a handler for it which calls pxe_core_isr().
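For illustration, the interrupt number could be taken from the UNDI GET INFORMATION call; this sketch assumes a pxe_core_call() helper for issuing PXE API calls, and the parameter block is truncated to the fields used here (the full layout is in the PXE 2.1 spec):

#include <stdint.h>

#define PXENV_UNDI_GET_INFORMATION      0x000C

/* truncated parameter block; the real structure has more fields */
typedef struct {
        uint16_t Status;
        uint16_t BaseIo;
        uint16_t IntNumber;                     /* IRQ line used by the NIC */
} t_PXENV_UNDI_GET_INFORMATION;

extern int pxe_core_call(uint16_t opcode, void *param);        /* assumed helper */

/* returns the NIC's IRQ number, or -1 on failure */
static int
pxe_get_nic_irq(void)
{
        t_PXENV_UNDI_GET_INFORMATION info = { 0 };

        if (pxe_core_call(PXENV_UNDI_GET_INFORMATION, &info) != 0 ||
            info.Status != 0)
                return (-1);

        return (info.IntNumber);
}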

The installation phase is tricky, because we need to change the IDT, and client code, as I understand it, is not executed in ring 0. There is the SYS_EXEC system call (__exec()), but according to the assembler code there must be no return from code run by this call, otherwise SYS_EXIT is executed. In my case I only need to install the handler, and that’s all. If I use __exec(), then after modifying the IDT the only flow of control is exit(), or staying in ring 0 forever while the PXE-related code runs. The last case is not what I want, because the idea is to preserve as much boot functionality as possible (and thus to return control to the BTX client code).

So, what do I need to install the interrupt handler correctly? That is the question. If I understand correctly, one way is to modify btx.S and add another syscall that allows installing an interrupt handler (something like super2user_isr_install(), which installs a handler that calls a user-defined function in user space).

Well, there is also another way: use the reflected interrupts and add a handler (modify the interrupt vector table) in vm86 space. In that case there must be some call to notify user code (not in vm86, but in ring 3) that the interrupt occurred. That also needs a system call (something like real2user() and user2real(); vm86int() does, in pseudocode, user2real(), real_call(), real2user() in sequence, but it needs to be possible to run this flow in the reverse direction).

I wonder if anybody has ever needed to install an interrupt handler in a BTX client. If I’m understanding everything correctly, the first variant with super2user_isr_install() seems a better solution than the second one.
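Just to make the first variant concrete, here is a very rough sketch of how it might look from the BTX client side; super2user_isr_install() and its calling convention are only my guess at the interface, nothing like it exists in btx.S today:

extern void pxe_core_isr(void);

/* hypothetical new BTX syscall wrapper, analogous to __exec() */
extern void super2user_isr_install(int vector, void (*user_handler)(void));

/* runs in ring 3, invoked by the ring 0 stub the new syscall installs */
static void
nic_isr_user(void)
{
        pxe_core_isr();                         /* receive and classify packets */
}

void
pxe_install_isr(int nic_irq)
{
        /* standard PC mapping: IRQ 0-7 -> vectors 0x08-0x0F, IRQ 8-15 -> 0x70-0x77 */
        int vector = (nic_irq < 8) ? 0x08 + nic_irq : 0x70 + (nic_irq - 8);

        super2user_isr_install(vector, nic_isr_user);
}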

BTX & etc

Wednesday, April 18th, 2007

The real question to answer was: use PXE in protected mode (which is hazily documented in the Intel specs and PXE SDK), or do everything in real mode except the memory copy routines, which would enter protected mode and leave it after the copy operation (this case is fine, but it limits the use of common functions that I wanted to take from libstand).

Now I’m thinking the best way is to use BTX and virtual 8086 mode, or to do something similar based on the BTX v86 monitor and its protected mode initialisation.

So, here again are two ways:

1. implement “disk via http” for the loader (like PXE does in libi386/pxe.c) and use the tcp lib for that.

2. make an independent solution based on the tcp lib and initialisation code similar to BTX, but this needs more reading on how the kernel loads if no drive is specified.

What’s common to these two cases is the tcp lib, so I’ll start from that. Then the best choice is to do “disk via http”, which will test the main tcp lib code and its usability. After that it will be time to think about the independent solution, which requires more information about kernel arguments and a better understanding of how the bootinfo struct is used.
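To make case 1 a bit more concrete, here is a minimal sketch of registering a “disk via http” device, following the pattern of pxedisk in libi386/pxe.c; the pxe_http_* handlers and the device name are my placeholders, and the exact struct devsw layout lives in the loader’s bootstrap.h:

#include <stand.h>
#include <bootstrap.h>

/* placeholder handlers, to be implemented on top of the tcp/pxe_http code */
static int  pxe_http_init(void);
static int  pxe_http_strategy(void *devdata, int rw, daddr_t blk,
                              size_t size, char *buf, size_t *rsize);
static int  pxe_http_open(struct open_file *f, ...);
static int  pxe_http_close(struct open_file *f);
static void pxe_http_print(int verbose);

/* registered in devsw[] the same way pxedisk is */
struct devsw pxe_http_disk = {
        "pxehttp",              /* device name seen in loader paths */
        DEVT_NET,               /* behaves like a network device    */
        pxe_http_init,
        pxe_http_strategy,
        pxe_http_open,
        pxe_http_close,
        noioctl,                /* common loader stub: no ioctls    */
        pxe_http_print
};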

Although libstand will be used, calls to its functions (memory allocation, timer, console output) will go through wrappers with a `pxe_` prefix. That will be useful if at some point in the future I decide not to use libstand, and it may also make porting easier, or allow using the code in another OS that has no libstand.

Well, libi386/pxe.c already uses such a prefix (pxe_open(), pxe_enable(), etc.), but I think the usage will be different and there won’t be any naming problems. For “drive over http” the `pxe_http_` prefix will be used.
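A minimal sketch of the wrapper idea (the pxe_ names here are just an illustration, not existing code): for now they simply forward to libstand, and later they could be reimplemented for an environment without it.

#include <stand.h>      /* libstand: malloc(), free(), putchar(), ... */

void *
pxe_alloc(size_t size)
{
        return (malloc(size));
}

void
pxe_free(void *ptr)
{
        free(ptr);
}

void
pxe_putc(int c)
{
        putchar(c);
}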

I think at first the receive/send buffers will be static, to avoid dynamic allocation. For this project that is possible, because the pxe_http code has to handle practically one connection most of the time.
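For illustration, something as simple as this would do; the sizes are arbitrary placeholders for a single HTTP connection:

#include <stdint.h>

#define PXE_RECV_BUFSIZE        4096    /* placeholder sizes */
#define PXE_SEND_BUFSIZE        2048

static uint8_t pxe_recv_buffer[PXE_RECV_BUFSIZE];
static uint8_t pxe_send_buffer[PXE_SEND_BUFSIZE];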

Hm, today I’ll try to add dummy pxe_http support to the loader and see how it works. If that succeeds, I’ll sync some files via Perforce, and we’ll see whether I understand how to configure the p4 client correctly.

First post

Monday, April 16th, 2007

Well, I’ve just created this blog and will try to drop a few lines here every few days. Mainly, these will be my thoughts about a TCP implementation in a preboot environment and some notes about the current state of the project.
There’s plenty of time before the deadline, so I’m reading documentation and getting acquainted with Perforce. Right now I’m looking at the syslinux implementation of some PXE functions and at the Intel PXE 2.1 specification.

Also, it seems like a good idea to use libstand (or parts of it) for common functions, so I’m digging into what is possible with it.