[7823] in linux-scsi channel archive


SCSI queueing status.

daemon@ATHENA.MIT.EDU (Eric Youngdale)
Thu Jan 13 12:59:04 2000

Message-ID: <007601bf5d81$3a0d55a0$2017a8c0@eric.home>
From: "Eric Youngdale" <eric@andante.org>
To: <linux-scsi@vger.rutgers.edu>
Date:   Wed, 12 Jan 2000 23:46:59 -0500

    Just thought I would drop a note and tell people where things stand.

    For the most part, things have been kind of quiet lately.  I am trying to polish off a few remaining issues.  I think I have gdth licked, and I think I have the host blocking feature added back again; at this point I am waiting on the people testing it.  In the background I am starting to clean up a few other things - I have split part of scsi.c into a new file (and in the long run there will probably be several others as well).  Unfortunately scsi.c has served as sort of a repository for a bunch of loosely related things, and both readability and maintainability have suffered.  I am appending the current patchset in the event that anyone else wants to look at it.

    I was talking with someone about how to implement automatic spinup of spun-down drives, and I realized that there is a design flaw in the new queueing code.  As things currently stand, we pre-allocate Scsi_Cmnd structures based upon the queue depth of the device.  For a disk with a queue depth of 1, there will be only one Scsi_Cmnd structure dedicated to that device.  When a user does something that causes an ioctl to be generated, the Scsi_Cmnd structure is allocated in the ioctl code and then passed via scsi_do_cmd() or scsi_wait_cmd() into the mid-layer, where it is inserted at the end of the queue.

    The queue handler function currently only looks at the head of the queue.  It keeps queueing things as long as resources are available to queue more commands.  One of the resources that gets used up, of course, is the supply of Scsi_Cmnd structures.  It is basically assumed that all of the structures that are allocated are queued to the device, so if we run out, the queue handler can simply return without doing anything.  The idea is that when one of the currently running commands completes, we make another stab at queueing the request at the head of the queue.

    Consider a disk that is fairly active and has a queue depth of 1.  Imagine that a user issues an ioctl - this will allocate the only Scsi_Cmnd structure and drop it at the tail of the queue.  The scsi_request_fn() function will look at the head of the queue, be unable to queue that request because there are no Scsi_Cmnd structures available, and return.  Hence a sort of deadlock arises.

    My guess at the moment is that this situation isn't all that common - nobody has reported it yet, but it is only a matter of time before somebody stumbles across it.  The reason it hasn't come up is that it would only appear for a disk or cdrom that is fairly busy, has a small queue depth, and has an ioctl issued against it.  The trick, of course, is to figure out how to fix it.

    In the short term, I could easily add a simple hack to detect this situation and recover from it.  No big deal, and until a more architecturally satisfying solution arises, this would do.

    In the longer term, several things come to mind.  First of all, upper level drivers probably don't actually need a Scsi_Cmnd structure for anything except completion processing.  As things currently stand, an upper level driver simply allocates the thing and then calls scsi_do_cmd/scsi_wait_cmd without having done much to the Scsi_Cmnd structure at all.  Once the command is complete, the upper level driver usually releases it after looking at the status code.  There are only a handful of things that the upper level driver actually needs - usually just the completion code and, in the event that something went wrong, the sense data.  I could come up with something analogous to scsi_do_cmd or scsi_wait_cmd that doesn't even take a Scsi_Cmnd structure, but at this point I don't know to what degree it is possible to replace scsi_do_cmd/scsi_wait_cmd.  If a 100% replacement is feasible, this would be a complete solution.

    The second thought that comes to mind is that we really need to get away from pre-allocating Scsi_Cmnd structures for devices.  There should be a general purpose pool allocated for the host instead (to prevent starvation, there might be one Scsi_Cmnd pre-allocated per device, with the rest coming from the general purpose pool, but that's just an idea).  The general idea is that the queue depth would be managed at a different level, so that it is less likely that we get into the trouble I described above.  While we do have a known problem of overallocating Scsi_Cmnd structures, I don't like the idea of allocating consumables before they are needed, and this would be more of a band-aid.

    The last possibility is that for things like ioctl it is *possible* that we could allocate an additional Scsi_Cmnd from the general kernel memory pool and hook it in.  In general you don't want to call kmalloc() anywhere in the code path of I/O operations that might take place during swapping, but ioctl generally won't fall into this category.  I am not that wild about this one.

    As things stand, I am still thinking about it.  A quickie hack-type workaround certainly can be done to prevent any user-visible symptoms, but at this point I am trying to decide what would make the most sense from an architectural point of view.  I won't bother to code up the workaround unless people start to experience this problem in the field.  Unless I am hit with a brilliant flash of insight, I will probably play with the first possibility and see how much trouble it gets me into.

-Eric


[Attachment: linux39a.diff]

Index: linux/Documentation/Configure.help
diff -u linux/Documentation/Configure.help:1.1.1.3 linux/Documentation/Configure.help:1.2
--- linux/Documentation/Configure.help:1.1.1.3	Fri Jan  7 22:33:08 2000
+++ linux/Documentation/Configure.help	Mon Jan 10 21:45:08 2000
@@ -4103,6 +4103,17 @@
   so most people can say N here and should in fact do so, because it
   is safer.
 
+Enable host blocking to help with buggy DMA chipsets
+CONFIG_SCSI_HOST_BLOCK
+  Some ISA DMA chipsets are buggy in the sense that if there is more
+  than one ISA DMA busmaster active at the same time that the system
+  becomes unstable.  In order to ensure stability, the host block
+  feature was added which ensures that only one such card has active
+  commands at one time.  You only need this if you have more than
+  one ISA busmaster on your system - this does not apply to PCI
+  hosts, and it certainly doesn't apply if you only have one
+  SCSI host adapter.
+
 Verbose SCSI error reporting (kernel size +=12K)
 CONFIG_SCSI_CONSTANTS
   The error messages regarding your SCSI hardware will be easier to
Index: linux/drivers/scsi/Config.in
diff -u linux/drivers/scsi/Config.in:1.1.1.3 linux/drivers/scsi/Config.in:1.2
--- linux/drivers/scsi/Config.in:1.1.1.3	Sat Dec 25 12:58:42 1999
+++ linux/drivers/scsi/Config.in	Mon Jan 10 21:44:27 2000
@@ -18,6 +18,7 @@
 
 bool '  Verbose SCSI error reporting (kernel size +=12K)' CONFIG_SCSI_CONSTANTS
 bool '  SCSI logging facility' CONFIG_SCSI_LOGGING
+bool '  Enable host blocking to help with buggy DMA chipsets' CONFIG_SCSI_HOST_BLOCK
 
 mainmenu_option next_comment
 comment 'SCSI low-level drivers'
Index: linux/drivers/scsi/Makefile
diff -u linux/drivers/scsi/Makefile:1.1.1.4 linux/drivers/scsi/Makefile:1.2
--- linux/drivers/scsi/Makefile:1.1.1.4	Thu Jan  6 01:53:07 2000
+++ linux/drivers/scsi/Makefile	Mon Jan 10 21:44:27 2000
@@ -41,7 +41,7 @@
   endif
   L_OBJS += scsi_n_syms.o hosts.o scsi_ioctl.o constants.o scsicam.o
   L_OBJS += scsi_error.o scsi_obsolete.o scsi_queue.o scsi_lib.o
-  L_OBJS += scsi_merge.o scsi_proc.o
+  L_OBJS += scsi_merge.o scsi_proc.o scsi_dma.o
 else
   ifeq ($(CONFIG_SCSI),m)
     MIX_OBJS += scsi_syms.o
@@ -722,10 +722,10 @@
 
 scsi_mod.o: $(MIX_OBJS) hosts.o scsi.o scsi_ioctl.o constants.o \
 		scsicam.o scsi_proc.o scsi_error.o scsi_obsolete.o \
-		scsi_queue.o scsi_lib.o scsi_merge.o
+		scsi_queue.o scsi_lib.o scsi_merge.o scsi_dma.o
 	$(LD) $(LD_RFLAG) -r -o $@ $(MIX_OBJS) hosts.o scsi.o scsi_ioctl.o \
 		constants.o scsicam.o scsi_proc.o scsi_merge.o     \
-		scsi_error.o scsi_obsolete.o scsi_queue.o scsi_lib.o
+		scsi_error.o scsi_obsolete.o scsi_queue.o scsi_lib.o scsi_dma.o
 
 sr_mod.o: sr.o sr_ioctl.o sr_vendor.o
 	$(LD) $(LD_RFLAG) -r -o $@ sr.o sr_ioctl.o sr_vendor.o
Index: linux/drivers/scsi/gdth.c
diff -u linux/drivers/scsi/gdth.c:1.1.1.1 linux/drivers/scsi/gdth.c:1.2
--- linux/drivers/scsi/gdth.c:1.1.1.1	Mon Jan  3 14:27:56 2000
+++ linux/drivers/scsi/gdth.c	Mon Jan  3 18:31:13 2000
@@ -3157,7 +3157,7 @@
             NUMDATA(shp)->busnum= 0;
 
             ha->pccb = CMDDATA(shp);
-            ha->pscratch = scsi_init_malloc(GDTH_SCRATCH, GFP_ATOMIC | GFP_DMA);
+            ha->pscratch = (void *) __get_free_pages(GFP_ATOMIC | GFP_DMA, GDTH_SCRATCH_ORD);
             ha->scratch_busy = FALSE;
             ha->req_first = NULL;
             ha->tid_cnt = MAX_HDRIVES;
@@ -3172,7 +3172,7 @@
                 --gdth_ctr_count;
                 --gdth_ctr_vcount;
                 if (ha->pscratch != NULL)
-                    scsi_init_free((void *)ha->pscratch, GDTH_SCRATCH);
+                    free_pages((unsigned long)ha->pscratch, GDTH_SCRATCH_ORD);
                 free_irq(ha->irq,NULL);
                 scsi_unregister(shp);
                 continue;
@@ -3223,7 +3223,7 @@
                     NUMDATA(shp)->hanum));
 
             ha->pccb = CMDDATA(shp);
-            ha->pscratch = scsi_init_malloc(GDTH_SCRATCH, GFP_ATOMIC | GFP_DMA);
+            ha->pscratch = (void *) __get_free_pages(GFP_ATOMIC | GFP_DMA, GDTH_SCRATCH_ORD);
             ha->scratch_busy = FALSE;
             ha->req_first = NULL;
             ha->tid_cnt = MAX_HDRIVES;
@@ -3238,7 +3238,7 @@
                 --gdth_ctr_count;
                 --gdth_ctr_vcount;
                 if (ha->pscratch != NULL)
-                    scsi_init_free((void *)ha->pscratch, GDTH_SCRATCH);
+                    free_pages((unsigned long)ha->pscratch, GDTH_SCRATCH_ORD);
                 free_irq(ha->irq,NULL);
                 scsi_unregister(shp);
                 continue;
@@ -3293,7 +3293,7 @@
             NUMDATA(shp)->busnum= 0;
 
             ha->pccb = CMDDATA(shp);
-            ha->pscratch = scsi_init_malloc(GDTH_SCRATCH, GFP_ATOMIC | GFP_DMA);
+            ha->pscratch = (void *) __get_free_pages(GFP_ATOMIC | GFP_DMA, GDTH_SCRATCH_ORD);
             ha->scratch_busy = FALSE;
             ha->req_first = NULL;
             ha->tid_cnt = pcistr[ctr].device_id >= 0x200 ? MAXID : MAX_HDRIVES;
@@ -3308,7 +3308,7 @@
                 --gdth_ctr_count;
                 --gdth_ctr_vcount;
                 if (ha->pscratch != NULL)
-                    scsi_init_free((void *)ha->pscratch, GDTH_SCRATCH);
+                    free_pages((unsigned long)ha->pscratch, GDTH_SCRATCH_ORD);
                 free_irq(ha->irq,NULL);
                 scsi_unregister(shp);
                 continue;
@@ -3359,7 +3359,7 @@
         if (shp->dma_channel != 0xff) {
             free_dma(shp->dma_channel);
         }
-        scsi_init_free((void *)ha->pscratch, GDTH_SCRATCH);
+        free_pages((unsigned long)ha->pscratch, GDTH_SCRATCH_ORD);
         gdth_ctr_released++;
        TRACE2(("gdth_release(): HA %d of %d\n",
                gdth_ctr_released, gdth_ctr_count));
@@ -3561,22 +3561,19 @@
 {
     int             i;
     gdth_ha_str     *ha;
-    Scsi_Cmnd       scp;
-    Scsi_Device     sdev;
+    Scsi_Cmnd     * scp;
+    Scsi_Device   * sdev;
    gdth_cmd_str    gdtcmd;
 
     TRACE2(("gdth_flush() hanum %d\n",hanum));
     ha = HADATA(gdth_ctr_tab[hanum]);
-    memset(&sdev,0,sizeof(Scsi_Device));
-    memset(&scp, 0,sizeof(Scsi_Cmnd));
-    sdev.host = gdth_ctr_tab[hanum];
-    sdev.id = sdev.host->this_id;
-    scp.cmd_len = 12;
-    scp.host = gdth_ctr_tab[hanum];
-    scp.target = sdev.host->this_id;
-    scp.device = &sdev;
-    scp.use_sg = 0;
 
+    sdev = scsi_get_host_dev(gdth_ctr_tab[hanum]);
+    scp  = scsi_allocate_device(sdev, 1, FALSE);
+
+    scp->cmd_len = 12;
+    scp->use_sg = 0;
+
     for (i = 0; i < MAX_HDRIVES; ++i) {
         if (ha->hdr[i].present) {
             gdtcmd.BoardNode = LOCALBOARD;
@@ -3586,9 +3583,11 @@
             gdtcmd.u.cache.BlockNo = 1;
             gdtcmd.u.cache.sg_canz = 0;
             TRACE2(("gdth_flush(): flush ha %d drive %d\n", hanum, i));
-            gdth_do_cmd(&scp, &gdtcmd, 30);
+            gdth_do_cmd(scp, &gdtcmd, 30);
         }
     }
+    scsi_release_command(scp);
+    scsi_free_host_dev(sdev);
 }
 
 /* shutdown routine */
@@ -3596,8 +3595,8 @@
 {
     int             hanum;
 #ifndef __alpha__
-    Scsi_Cmnd       scp;
-    Scsi_Device     sdev;
+    Scsi_Cmnd     * scp;
+    Scsi_Device   * sdev;
     gdth_cmd_str    gdtcmd;
 #endif
 
@@ -3610,23 +3609,21 @@
 
 #ifndef __alpha__
         /* controller reset */
-        memset(&sdev,0,sizeof(Scsi_Device));
-        memset(&scp, 0,sizeof(Scsi_Cmnd));
-        sdev.host = gdth_ctr_tab[hanum];
-        sdev.id = sdev.host->this_id;
-        scp.cmd_len = 12;
-        scp.host = gdth_ctr_tab[hanum];
-        scp.target = sdev.host->this_id;
-        scp.device = &sdev;
-        scp.use_sg = 0;
+	sdev = scsi_get_host_dev(gdth_ctr_tab[hanum]);
+	scp  = scsi_allocate_device(sdev, 1, FALSE);
+        scp->cmd_len = 12;
+        scp->use_sg = 0;
 
         gdtcmd.BoardNode = LOCALBOARD;
         gdtcmd.Service = CACHESERVICE;
         gdtcmd.OpCode = GDT_RESET;
         TRACE2(("gdth_halt(): reset controller %d\n", hanum));
-        gdth_do_cmd(&scp, &gdtcmd, 10);
+        gdth_do_cmd(scp, &gdtcmd, 10);
+	scsi_release_command(scp);
+	scsi_free_host_dev(sdev);
 #endif
     }
+
     printk("Done.\n");
 
 #ifdef GDTH_STATISTICS
Index: linux/drivers/scsi/gdth.h
diff -u linux/drivers/scsi/gdth.h:1.1.1.1 linux/drivers/scsi/gdth.h:1.2
--- linux/drivers/scsi/gdth.h:1.1.1.1	Mon Jan  3 14:27:56 2000
+++ linux/drivers/scsi/gdth.h	Mon Jan  3 18:31:14 2000
@@ -126,7 +126,8 @@
 #endif
 
 /* limits */
-#define GDTH_SCRATCH    4096                    /* 4KB scratch buffer */
+#define GDTH_SCRATCH    PAGE_SIZE                    /* 4KB scratch buffer */
+#define GDTH_SCRATCH_ORD 0                      /* order 0 means 1 page */
 #define GDTH_MAXCMDS    124
 #define GDTH_MAXC_P_L   16                      /* max. cmds per lun */
 #define GDTH_MAX_RAW    2                       /* max. cmds per raw device */
Index: linux/drivers/scsi/gdth_proc.c
diff -u linux/drivers/scsi/gdth_proc.c:1.1.1.1 linux/drivers/scsi/gdth_proc.c:1.3
--- linux/drivers/scsi/gdth_proc.c:1.1.1.1	Mon Jan  3 14:45:15 2000
+++ linux/drivers/scsi/gdth_proc.c	Mon Jan 10 21:44:27 2000
@@ -31,22 +31,17 @@
 static int gdth_set_info(char *buffer,int length,int vh,int hanum,int busnum)
 {
     int             ret_val;
-    Scsi_Cmnd       scp;
-    Scsi_Device     sdev;
+    Scsi_Cmnd     * scp;
+    Scsi_Device   * sdev;
     gdth_iowr_str   *piowr;
 
     TRACE2(("gdth_set_info() ha %d bus %d\n",hanum,busnum));
     piowr = (gdth_iowr_str *)buffer;
 
-    memset(&sdev,0,sizeof(Scsi_Device));
-    memset(&scp, 0,sizeof(Scsi_Cmnd));
-    sdev.host = gdth_ctr_vtab[vh];
-    sdev.id = sdev.host->this_id;
-    scp.cmd_len = 12;
-    scp.host = gdth_ctr_vtab[vh];
-    scp.target = sdev.host->this_id;
-    scp.device = &sdev;
-    scp.use_sg = 0;
+    sdev = scsi_get_host_dev(gdth_ctr_vtab[vh]);
+    scp  = scsi_allocate_device(sdev, 1, FALSE);
+    scp->cmd_len = 12;
+    scp->use_sg = 0;
 
     if (length >= 4) {
         if (strncmp(buffer,"gdth",4) == 0) {
@@ -62,10 +57,14 @@
     } else {
         ret_val = -EINVAL;
     }
+
+    scsi_release_command(scp);
+    scsi_free_host_dev(sdev);
+
     return ret_val;
 }
 
-static int gdth_set_asc_info(char *buffer,int length,int hanum,Scsi_Cmnd scp)
+static int gdth_set_asc_info(char *buffer,int length,int hanum,Scsi_Cmnd * scp)
 {
     int             orig_length, drive, wb_mode;
     int             i, found;
@@ -105,7 +104,7 @@
                 gdtcmd.u.cache.DeviceNo = i;
                 gdtcmd.u.cache.BlockNo = 1;
                 gdtcmd.u.cache.sg_canz = 0;
-                gdth_do_cmd(&scp, &gdtcmd, 30);
+                gdth_do_cmd(scp, &gdtcmd, 30);
             }
         }
         if (!found)
@@ -158,7 +157,7 @@
         gdtcmd.u.ioctl.subfunc = CACHE_CONFIG;
         gdtcmd.u.ioctl.channel = INVALID_CHANNEL;
         pcpar->write_back = wb_mode==1 ? 0:1;
-        gdth_do_cmd(&scp, &gdtcmd, 30);
+        gdth_do_cmd(scp, &gdtcmd, 30);
         gdth_ioctl_free(hanum);
         printk("Done.\n");
         return(orig_length);
@@ -168,7 +167,7 @@
     return(-EINVAL);
 }
 
-static int gdth_set_bin_info(char *buffer,int length,int hanum,Scsi_Cmnd scp)
+static int gdth_set_bin_info(char *buffer,int length,int hanum,Scsi_Cmnd * scp)
 {
     unchar          i, j;
     gdth_ha_str     *ha;
@@ -241,8 +240,8 @@
             *ppadd2 = virt_to_bus(piord->iu.general.data+add_size);
         }
         /* do IOCTL */
-        gdth_do_cmd(&scp, pcmd, piowr->timeout);
-        piord->status = (ulong32)scp.SCp.Message;
+        gdth_do_cmd(scp, pcmd, piowr->timeout);
+        piord->status = (ulong32)scp->SCp.Message;
         break;
 
       case GDTIOCTL_DRVERS:
@@ -401,8 +400,8 @@
 
     gdth_cmd_str gdtcmd;
     gdth_evt_str estr;
-    Scsi_Cmnd scp;
-    Scsi_Device sdev;
+    Scsi_Cmnd  * scp;
+    Scsi_Device *sdev;
     char hrec[161];
     struct timeval tv;
 
@@ -417,15 +416,10 @@
     ha = HADATA(gdth_ctr_tab[hanum]);
     id = length;
 
-    memset(&sdev,0,sizeof(Scsi_Device));
-    memset(&scp, 0,sizeof(Scsi_Cmnd));
-    sdev.host = gdth_ctr_vtab[vh];
-    sdev.id = sdev.host->this_id;
-    scp.cmd_len = 12;
-    scp.host = gdth_ctr_vtab[vh];
-    scp.target = sdev.host->this_id;
-    scp.device = &sdev;
-    scp.use_sg = 0;
+    sdev = scsi_get_host_dev(gdth_ctr_vtab[vh]);
+    scp  = scsi_allocate_device(sdev, 1, FALSE);
+    scp->cmd_len = 12;
+    scp->use_sg = 0;
 
     /* look for buffer ID in length */
     if (id > 1) {
@@ -531,11 +525,11 @@
                     sizeof(pds->list[0]);
                 if (pds->entries > cnt)
                     pds->entries = cnt;
-                gdth_do_cmd(&scp, &gdtcmd, 30);
-                if (scp.SCp.Message != S_OK)
+                gdth_do_cmd(scp, &gdtcmd, 30);
+                if (scp->SCp.Message != S_OK)
                     pds->count = 0;
                 TRACE2(("pdr_statistics() entries %d status %d\n",
-                        pds->count, scp.SCp.Message));
+                        pds->count, scp->SCp.Message));
 
                 /* other IOCTLs must fit into area GDTH_SCRATCH/4 */
                 for (j = 0; j < ha->raw[i].pdev_cnt; ++j) {
@@ -551,8 +545,8 @@
                     gdtcmd.u.ioctl.subfunc = SCSI_DR_INFO | L_CTRL_PATTERN;
                     gdtcmd.u.ioctl.channel =
                         ha->raw[i].address | ha->raw[i].id_list[j];
-                    gdth_do_cmd(&scp, &gdtcmd, 30);
-                    if (scp.SCp.Message == S_OK) {
+                    gdth_do_cmd(scp, &gdtcmd, 30);
+                    if (scp->SCp.Message == S_OK) {
                         strncpy(hrec,pdi->vendor,8);
                         strncpy(hrec+8,pdi->product,16);
                         strncpy(hrec+24,pdi->revision,4);
@@ -602,8 +596,8 @@
                         gdtcmd.u.ioctl.channel =
                             ha->raw[i].address | ha->raw[i].id_list[j];
                         pdef->sddc_type = 0x08;
-                        gdth_do_cmd(&scp, &gdtcmd, 30);
-                        if (scp.SCp.Message == S_OK) {
+                        gdth_do_cmd(scp, &gdtcmd, 30);
+                        if (scp->SCp.Message == S_OK) {
                             size = sprintf(buffer+len,
                                            " Grown Defects:\t%d\n",
                                            pdef->sddc_cnt);
@@ -649,8 +643,8 @@
                     gdtcmd.u.ioctl.param_size = sizeof(gdth_cdrinfo_str);
                     gdtcmd.u.ioctl.subfunc = CACHE_DRV_INFO;
                     gdtcmd.u.ioctl.channel = drv_no;
-                    gdth_do_cmd(&scp, &gdtcmd, 30);
-                    if (scp.SCp.Message != S_OK)
+                    gdth_do_cmd(scp, &gdtcmd, 30);
+                    if (scp->SCp.Message != S_OK)
                         break;
                     pcdi->ld_dtype >>= 16;
                     j++;
@@ -746,8 +740,8 @@
                 gdtcmd.u.ioctl.param_size = sizeof(gdth_arrayinf_str);
                 gdtcmd.u.ioctl.subfunc = ARRAY_INFO | LA_CTRL_PATTERN;
                 gdtcmd.u.ioctl.channel = i;
-                gdth_do_cmd(&scp, &gdtcmd, 30);
-                if (scp.SCp.Message == S_OK) {
+                gdth_do_cmd(scp, &gdtcmd, 30);
+                if (scp->SCp.Message == S_OK) {
                     if (pai->ai_state == 0)
                         strcpy(hrec, "idle");
                     else if (pai->ai_state == 2)
@@ -821,8 +815,8 @@
                 gdtcmd.u.ioctl.channel = i;
                 phg->entries = MAX_HDRIVES;
                 phg->offset = GDTOFFSOF(gdth_hget_str, entry[0]);
-                gdth_do_cmd(&scp, &gdtcmd, 30);
-                if (scp.SCp.Message != S_OK) {
+                gdth_do_cmd(scp, &gdtcmd, 30);
+                if (scp->SCp.Message != S_OK) {
                     ha->hdr[i].ldr_no = i;
                     ha->hdr[i].rw_attribs = 0;
                     ha->hdr[i].start_sec = 0;
@@ -837,7 +831,7 @@
                     }
                 }
                 TRACE2(("host_get entries %d status %d\n",
-                        phg->entries, scp.SCp.Message));
+                        phg->entries, scp->SCp.Message));
             }
             gdth_ioctl_free(hanum);
 
@@ -915,6 +909,10 @@
     }
 
 stop_output:
+
+    scsi_release_command(scp);
+    scsi_free_host_dev(sdev);
+
     *start = buffer +(offset-begin);
     len -= (offset-begin);
     if (len > length)
Index: linux/drivers/scsi/gdth_proc.h
diff -u linux/drivers/scsi/gdth_proc.h:1.1.1.1 linux/drivers/scsi/gdth_proc.h:1.2
--- linux/drivers/scsi/gdth_proc.h:1.1.1.1	Mon Jan  3 15:22:51 2000
+++ linux/drivers/scsi/gdth_proc.h	Mon Jan  3 18:31:14 2000
@@ -6,8 +6,8 @@
  */
 
 static int gdth_set_info(char *buffer,int length,int vh,int hanum,int busnum);
-static int gdth_set_asc_info(char *buffer,int length,int hanum,Scsi_Cmnd scp);
-static int gdth_set_bin_info(char *buffer,int length,int hanum,Scsi_Cmnd scp);
+static int gdth_set_asc_info(char *buffer,int length,int hanum,Scsi_Cmnd * scp);
+static int gdth_set_bin_info(char *buffer,int length,int hanum,Scsi_Cmnd * scp);
 static int gdth_get_info(char *buffer,char **start,off_t offset,
                          int length,int vh,int hanum,int busnum);
 
Index: linux/drivers/scsi/hosts.c
diff -u linux/drivers/scsi/hosts.c:1.1.1.2 linux/drivers/scsi/hosts.c:1.2
--- linux/drivers/scsi/hosts.c:1.1.1.2	Sat Dec 18 18:33:45 1999
+++ linux/drivers/scsi/hosts.c	Mon Jan 10 21:44:27 2000
@@ -869,7 +869,8 @@
     printk ("scsi : %d host%s.\n", next_scsi_host,
 	    (next_scsi_host == 1) ? "" : "s");
    
-   
+    scsi_make_blocked_list();
+       
     /* Now attach the high level drivers */
 #ifdef CONFIG_BLK_DEV_SD
     scsi_register_device(&sd_template);
Index: linux/drivers/scsi/hosts.h
diff -u linux/drivers/scsi/hosts.h:1.1.1.4 linux/drivers/scsi/hosts.h:1.4
--- linux/drivers/scsi/hosts.h:1.1.1.4	Fri Jan  7 22:33:08 2000
+++ linux/drivers/scsi/hosts.h	Mon Jan 10 21:44:27 2000
@@ -334,6 +334,13 @@
     unsigned int max_lun;
     unsigned int max_channel;
 
+    /*
+     * Pointer to a circularly linked list - this indicates the hosts
+     * that should be locked out of performing I/O while we have an active
+     * command on this host.
+     */
+    struct Scsi_Host * block;
+    unsigned wish_block:1;
 
     /* These parameters should be set by the detect routine */
     unsigned long base;
Index: linux/drivers/scsi/scsi.c
diff -u linux/drivers/scsi/scsi.c:1.1.1.7 linux/drivers/scsi/scsi.c:1.12
--- linux/drivers/scsi/scsi.c:1.1.1.7	Fri Jan  7 22:33:08 2000
+++ linux/drivers/scsi/scsi.c	Mon Jan 10 22:17:27 2000
@@ -86,21 +86,6 @@
  * Definitions and constants.
  */
 
-/*
- * PAGE_SIZE must be a multiple of the sector size (512).  True
- * for all reasonably recent architectures (even the VAX...).
- */
-#define SECTOR_SIZE		512
-#define SECTORS_PER_PAGE	(PAGE_SIZE/SECTOR_SIZE)
-
-#if SECTORS_PER_PAGE <= 8
-typedef unsigned char FreeSectorBitmap;
-#elif SECTORS_PER_PAGE <= 32
-typedef unsigned int FreeSectorBitmap;
-#else
-#error You lose.
-#endif
-
 #define MIN_RESET_DELAY (2*HZ)
 
 /* Do not call reset on error if we just did a reset within 15 sec. */
@@ -139,12 +124,6 @@
 static unsigned long serial_number = 0;
 static Scsi_Cmnd *scsi_bh_queue_head = NULL;
 static Scsi_Cmnd *scsi_bh_queue_tail = NULL;
-static FreeSectorBitmap *dma_malloc_freelist = NULL;
-static int need_isa_bounce_buffers;
-static unsigned int dma_sectors = 0;
-unsigned int scsi_dma_free_sectors = 0;
-unsigned int scsi_need_isa_buffer = 0;
-static unsigned char **dma_malloc_pages = NULL;
 
 /*
  * Note - the initial logging level can be set here to log events at boot time.
@@ -173,7 +152,6 @@
 /* 
  * Function prototypes.
  */
-static void resize_dma_pool(void);
 static void print_inquiry(unsigned char *data);
 extern void scsi_times_out(Scsi_Cmnd * SCpnt);
 static int scan_scsis_single(int channel, int dev, int lun, int *max_scsi_dev,
@@ -290,6 +268,47 @@
 	{NULL, NULL, NULL}
 };
 
+
+/*
+ * Function:    scsi_get_request_handler()
+ *
+ * Purpose:     Selects queue handler function for a device.
+ *
+ * Arguments:   SDpnt   - device for which we need a handler function.
+ *
+ * Returns:     Nothing
+ *
+ * Lock status: No locking assumed or required.
+ *
+ * Notes:       Most devices will end up using scsi_request_fn for the
+ *              handler function (at least as things are done now).
+ *              The "block" feature basically ensures that only one of
+ *              the blocked hosts is active at one time, mainly to work around
+ *              buggy DMA chipsets where the memory gets starved.
+ *              For this case, we have a special handler function, which
+ *              does some checks and ultimately calls scsi_request_fn.
+ *
+ *              As a future enhancement, it might be worthwhile to add support
+ *              for stacked handlers - there might get to be too many permutations
+ *              otherwise.  Then again, we might just have one handler that does
+ *              all of the special cases (a little bit slower), and those devices
+ *              that don't need the special case code would directly call
+ *              scsi_request_fn.
+ *
+ *              As it stands, I can think of a number of special cases that
+ *              we might need to handle.  This would not only include the blocked
+ *              case, but single_lun (for changers), and any special handling
+ *              we might need for a spun-down disk to spin it back up again.
+ */
+static request_fn_proc * scsi_get_request_handler(Scsi_Device * SDpnt, struct Scsi_Host * SHpnt) {
+#ifdef CONFIG_SCSI_HOST_BLOCK
+        if( SHpnt->wish_block ) {
+                return scsi_blocked_request_fn;
+        }
+#endif
+        return scsi_request_fn;
+}
+
 static int get_device_flags(unsigned char *response_data)
 {
 	int i = 0;
@@ -314,7 +333,6 @@
 	return 0;
 }
 
-
 static void scan_scsis_done(Scsi_Cmnd * SCpnt)
 {
 
@@ -437,7 +455,7 @@
 			 * the queue actually represents.   We could look it up, but it
 			 * is pointless work.
 			 */
-			blk_init_queue(&SDpnt->request_queue, scsi_request_fn);
+			blk_init_queue(&SDpnt->request_queue, scsi_get_request_handler(SDpnt, shpnt));
 			blk_queue_headactive(&SDpnt->request_queue, 0);
 			SDpnt->request_queue.queuedata = (void *) SDpnt;
 			/* Make sure we have something that is valid for DMA purposes */
@@ -520,7 +538,7 @@
 					}
 				}
 			}
-			resize_dma_pool();
+			scsi_resize_dma_pool();
 
 			for (sdtpnt = scsi_devicelist; sdtpnt; sdtpnt = sdtpnt->next) {
 				if (sdtpnt->finish && sdtpnt->nr_dev) {
@@ -882,7 +900,7 @@
 	 * the queue actually represents.   We could look it up, but it
 	 * is pointless work.
 	 */
-	blk_init_queue(&SDpnt->request_queue, scsi_request_fn);
+	blk_init_queue(&SDpnt->request_queue, scsi_get_request_handler(SDpnt, shpnt));
 	blk_queue_headactive(&SDpnt->request_queue, 0);
 	SDpnt->request_queue.queuedata = (void *) SDpnt;
 	SDpnt->host = shpnt;
@@ -990,11 +1008,6 @@
 static spinlock_t device_request_lock = SPIN_LOCK_UNLOCKED;
 
 /*
- * Used for access to internal allocator used for DMA safe buffers.
- */
-static spinlock_t allocator_request_lock = SPIN_LOCK_UNLOCKED;
-
-/*
  * Used to protect insertion into and removal from the queue of
  * commands to be processed by the bottom half handler.
  */
@@ -1810,127 +1823,6 @@
 static void scsi_unregister_host(Scsi_Host_Template *);
 #endif
 
-/*
- * Function:    scsi_malloc
- *
- * Purpose:     Allocate memory from the DMA-safe pool.
- *
- * Arguments:   len       - amount of memory we need.
- *
- * Lock status: No locks assumed to be held.  This function is SMP-safe.
- *
- * Returns:     Pointer to memory block.
- *
- * Notes:       Prior to the new queue code, this function was not SMP-safe.
- *              This function can only allocate in units of sectors
- *              (i.e. 512 bytes).
- *
- *              We cannot use the normal system allocator becuase we need
- *              to be able to guarantee that we can process a complete disk
- *              I/O request without touching the system allocator.  Think
- *              about it - if the system were heavily swapping, and tried to
- *              write out a block of memory to disk, and the SCSI code needed
- *              to allocate more memory in order to be able to write the
- *              data to disk, you would wedge the system.
- */
-void *scsi_malloc(unsigned int len)
-{
-	unsigned int nbits, mask;
-	unsigned long flags;
-
-	int i, j;
-	if (len % SECTOR_SIZE != 0 || len > PAGE_SIZE)
-		return NULL;
-
-	nbits = len >> 9;
-	mask = (1 << nbits) - 1;
-
-	spin_lock_irqsave(&allocator_request_lock, flags);
-
-	for (i = 0; i < dma_sectors / SECTORS_PER_PAGE; i++)
-		for (j = 0; j <= SECTORS_PER_PAGE - nbits; j++) {
-			if ((dma_malloc_freelist[i] & (mask << j)) == 0) {
-				dma_malloc_freelist[i] |= (mask << j);
-				scsi_dma_free_sectors -= nbits;
-#ifdef DEBUG
-				SCSI_LOG_MLQUEUE(3, printk("SMalloc: %d %p [From:%p]\n", len, dma_malloc_pages[i] + (j << 9)));
-				printk("SMalloc: %d %p [From:%p]\n", len, dma_malloc_pages[i] + (j << 9));
-#endif
-				spin_unlock_irqrestore(&allocator_request_lock, flags);
-				return (void *) ((unsigned long) dma_malloc_pages[i] + (j << 9));
-			}
-		}
-	spin_unlock_irqrestore(&allocator_request_lock, flags);
-	return NULL;		/* Nope.  No more */
-}
-
-/*
- * Function:    scsi_free
- *
- * Purpose:     Free memory into the DMA-safe pool.
- *
- * Arguments:   ptr       - data block we are freeing.
- *              len       - size of block we are freeing.
- *
- * Lock status: No locks assumed to be held.  This function is SMP-safe.
- *
- * Returns:     Nothing
- *
- * Notes:       This function *must* only be used to free memory
- *              allocated from scsi_malloc().
- *
- *              Prior to the new queue code, this function was not SMP-safe.
- *              This function can only allocate in units of sectors
- *              (i.e. 512 bytes).
- */
-int scsi_free(void *obj, unsigned int len)
-{
-	unsigned int page, sector, nbits, mask;
-	unsigned long flags;
-
-#ifdef DEBUG
-	unsigned long ret = 0;
-
-#ifdef __mips__
-	__asm__ __volatile__("move\t%0,$31":"=r"(ret));
-#else
-	ret = __builtin_return_address(0);
-#endif
-	printk("scsi_free %p %d\n", obj, len);
-	SCSI_LOG_MLQUEUE(3, printk("SFree: %p %d\n", obj, len));
-#endif
-
-	spin_lock_irqsave(&allocator_request_lock, flags);
-
-	for (page = 0; page < dma_sectors / SECTORS_PER_PAGE; page++) {
-		unsigned long page_addr = (unsigned long) dma_malloc_pages[page];
-		if ((unsigned long) obj >= page_addr &&
-		    (unsigned long) obj < page_addr + PAGE_SIZE) {
-			sector = (((unsigned long) obj) - page_addr) >> 9;
-
-			nbits = len >> 9;
-			mask = (1 << nbits) - 1;
-
-			if ((mask << sector) >= (1 << SECTORS_PER_PAGE))
-				panic("scsi_free:Bad memory alignment");
-
-			if ((dma_malloc_freelist[page] &
-			     (mask << sector)) != (mask << sector)) {
-#ifdef DEBUG
-				printk("scsi_free(obj=%p, len=%d) called from %08lx\n",
-				       obj, len, ret);
-#endif
-				panic("scsi_free:Trying to free unused memory");
-			}
-			scsi_dma_free_sectors += nbits;
-			dma_malloc_freelist[page] &= ~(mask << sector);
-			spin_unlock_irqrestore(&allocator_request_lock, flags);
-			return 0;
-		}
-	}
-	panic("scsi_free:Bad offset");
-}
-
 
 int scsi_loadable_module_flag;	/* Set after we scan builtin drivers */
 
@@ -2114,7 +2006,7 @@
 	/*
 	 * This should build the DMA pool.
 	 */
-	resize_dma_pool();
+	scsi_resize_dma_pool();
 
 	/*
 	 * OK, now we finish the initialization by doing spin-up, read
@@ -2465,217 +2357,6 @@
 }
 #endif
 
-/*
- * Function:    resize_dma_pool
- *
- * Purpose:     Ensure that the DMA pool is sufficiently large to be
- *              able to guarantee that we can always process I/O requests
- *              without calling the system allocator.
- *
- * Arguments:   None.
- *
- * Lock status: No locks assumed to be held.  This function is SMP-safe.
- *
- * Returns:     Nothing
- *
- * Notes:       Prior to the new queue code, this function was not SMP-safe.
- *              Go through the device list and recompute the most appropriate
- *              size for the dma pool.  Then grab more memory (as required).
- */
-static void resize_dma_pool(void)
-{
-	int i, k;
-	unsigned long size;
-	unsigned long flags;
-	struct Scsi_Host *shpnt;
-	struct Scsi_Host *host = NULL;
-	Scsi_Device *SDpnt;
-	FreeSectorBitmap *new_dma_malloc_freelist = NULL;
-	unsigned int new_dma_sectors = 0;
-	unsigned int new_need_isa_buffer = 0;
-	unsigned char **new_dma_malloc_pages = NULL;
-	int out_of_space = 0;
-
-	spin_lock_irqsave(&allocator_request_lock, flags);
-
-	if (!scsi_hostlist) {
-		/*
-		 * Free up the DMA pool.
-		 */
-		if (scsi_dma_free_sectors != dma_sectors)
-			panic("SCSI DMA pool memory leak %d %d\n", scsi_dma_free_sectors, dma_sectors);
-
-		for (i = 0; i < dma_sectors / SECTORS_PER_PAGE; i++)
-			scsi_init_free(dma_malloc_pages[i], PAGE_SIZE);
-		if (dma_malloc_pages)
-			scsi_init_free((char *) dma_malloc_pages,
-				       (dma_sectors / SECTORS_PER_PAGE) * sizeof(*dma_malloc_pages));
-		dma_malloc_pages = NULL;
-		if (dma_malloc_freelist)
-			scsi_init_free((char *) dma_malloc_freelist,
-				       (dma_sectors / SECTORS_PER_PAGE) * sizeof(*dma_malloc_freelist));
-		dma_malloc_freelist = NULL;
-		dma_sectors = 0;
-		scsi_dma_free_sectors = 0;
-		spin_unlock_irqrestore(&allocator_request_lock, flags);
-		return;
-	}
-	/* Next, check to see if we need to extend the DMA buffer pool */
-
-	new_dma_sectors = 2 * SECTORS_PER_PAGE;		/* Base value we use */
-
-	if (__pa(high_memory) - 1 > ISA_DMA_THRESHOLD)
-		need_isa_bounce_buffers = 1;
-	else
-		need_isa_bounce_buffers = 0;
-
-	if (scsi_devicelist)
-		for (shpnt = scsi_hostlist; shpnt; shpnt = shpnt->next)
-			new_dma_sectors += SECTORS_PER_PAGE;	/* Increment for each host */
-
-	for (host = scsi_hostlist; host; host = host->next) {
-		for (SDpnt = host->host_queue; SDpnt; SDpnt = SDpnt->next) {
-			/*
-			 * sd and sr drivers allocate scatterlists.
-			 * sr drivers may allocate for each command 1x2048 or 2x1024 extra
-			 * buffers for 2k sector size and 1k fs.
-			 * sg driver allocates buffers < 4k.
-			 * st driver does not need buffers from the dma pool.
-			 * estimate 4k buffer/command for devices of unknown type (should panic).
-			 */
-			if (SDpnt->type == TYPE_WORM || SDpnt->type == TYPE_ROM ||
-			    SDpnt->type == TYPE_DISK || SDpnt->type == TYPE_MOD) {
-				new_dma_sectors += ((host->sg_tablesize *
-				sizeof(struct scatterlist) + 511) >> 9) *
-				 SDpnt->queue_depth;
-				if (SDpnt->type == TYPE_WORM || SDpnt->type == TYPE_ROM)
-					new_dma_sectors += (2048 >> 9) * SDpnt->queue_depth;
-			} else if (SDpnt->type == TYPE_SCANNER ||
-				   SDpnt->type == TYPE_PROCESSOR ||
-				   SDpnt->type == TYPE_MEDIUM_CHANGER ||
-				   SDpnt->type == TYPE_ENCLOSURE) {
-				new_dma_sectors += (4096 >> 9) * SDpnt->queue_depth;
-			} else {
-				if (SDpnt->type != TYPE_TAPE) {
-					printk("resize_dma_pool: unknown device type %d\n", SDpnt->type);
-					new_dma_sectors += (4096 >> 9) * SDpnt->queue_depth;
-				}
-			}
-
-			if (host->unchecked_isa_dma &&
-			    need_isa_bounce_buffers &&
-			    SDpnt->type != TYPE_TAPE) {
-				new_dma_sectors += (PAGE_SIZE >> 9) * host->sg_tablesize *
-				    SDpnt->queue_depth;
-				new_need_isa_buffer++;
-			}
-		}
-	}
-
-#ifdef DEBUG_INIT
-	printk("resize_dma_pool: needed dma sectors = %d\n", new_dma_sectors);
-#endif
-
-	/* limit DMA memory to 32MB: */
-	new_dma_sectors = (new_dma_sectors + 15) & 0xfff0;
-
-	/*
-	 * We never shrink the buffers - this leads to
-	 * race conditions that I would rather not even think
-	 * about right now.
-	 */
-#if 0				/* Why do this? No gain and risks out_of_space */
-	if (new_dma_sectors < dma_sectors)
-		new_dma_sectors = dma_sectors;
-#endif
-	if (new_dma_sectors <= dma_sectors) {
-		spin_unlock_irqrestore(&allocator_request_lock, flags);
-		return;		/* best to quit while we are in front */
-        }
-
-	for (k = 0; k < 20; ++k) {	/* just in case */
-		out_of_space = 0;
-		size = (new_dma_sectors / SECTORS_PER_PAGE) *
-		    sizeof(FreeSectorBitmap);
-		new_dma_malloc_freelist = (FreeSectorBitmap *)
-		    scsi_init_malloc(size, GFP_ATOMIC);
-		if (new_dma_malloc_freelist) {
-			size = (new_dma_sectors / SECTORS_PER_PAGE) *
-			    sizeof(*new_dma_malloc_pages);
-			new_dma_malloc_pages = (unsigned char **)
-			    scsi_init_malloc(size, GFP_ATOMIC);
-			if (!new_dma_malloc_pages) {
-				size = (new_dma_sectors / SECTORS_PER_PAGE) *
-				    sizeof(FreeSectorBitmap);
-				scsi_init_free((char *) new_dma_malloc_freelist, size);
-				out_of_space = 1;
-			}
-		} else
-			out_of_space = 1;
-
-		if ((!out_of_space) && (new_dma_sectors > dma_sectors)) {
-			for (i = dma_sectors / SECTORS_PER_PAGE;
-			   i < new_dma_sectors / SECTORS_PER_PAGE; i++) {
-				new_dma_malloc_pages[i] = (unsigned char *)
-				    scsi_init_malloc(PAGE_SIZE, GFP_ATOMIC | GFP_DMA);
-				if (!new_dma_malloc_pages[i])
-					break;
-			}
-			if (i != new_dma_sectors / SECTORS_PER_PAGE) {	/* clean up */
-				int k = i;
-
-				out_of_space = 1;
-				for (i = 0; i < k; ++i)
-					scsi_init_free(new_dma_malloc_pages[i], PAGE_SIZE);
-			}
-		}
-		if (out_of_space) {	/* try scaling down new_dma_sectors request */
-			printk("scsi::resize_dma_pool: WARNING, dma_sectors=%u, "
-			       "wanted=%u, scaling\n", dma_sectors, new_dma_sectors);
-			if (new_dma_sectors < (8 * SECTORS_PER_PAGE))
-				break;	/* pretty well hopeless ... */
-			new_dma_sectors = (new_dma_sectors * 3) / 4;
-			new_dma_sectors = (new_dma_sectors + 15) & 0xfff0;
-			if (new_dma_sectors <= dma_sectors)
-				break;	/* stick with what we have got */
-		} else
-			break;	/* found space ... */
-	}			/* end of for loop */
-	if (out_of_space) {
-		spin_unlock_irqrestore(&allocator_request_lock, flags);
-		scsi_need_isa_buffer = new_need_isa_buffer;	/* some useful info */
-		printk("      WARNING, not enough memory, pool not expanded\n");
-		return;
-	}
-	/* When we dick with the actual DMA list, we need to
-	 * protect things
-	 */
-	if (dma_malloc_freelist) {
-		size = (dma_sectors / SECTORS_PER_PAGE) * sizeof(FreeSectorBitmap);
-		memcpy(new_dma_malloc_freelist, dma_malloc_freelist, size);
-		scsi_init_free((char *) dma_malloc_freelist, size);
-	}
-	dma_malloc_freelist = new_dma_malloc_freelist;
-
-	if (dma_malloc_pages) {
-		size = (dma_sectors / SECTORS_PER_PAGE) * sizeof(*dma_malloc_pages);
-		memcpy(new_dma_malloc_pages, dma_malloc_pages, size);
-		scsi_init_free((char *) dma_malloc_pages, size);
-	}
-	scsi_dma_free_sectors += new_dma_sectors - dma_sectors;
-	dma_malloc_pages = new_dma_malloc_pages;
-	dma_sectors = new_dma_sectors;
-	scsi_need_isa_buffer = new_need_isa_buffer;
-
-	spin_unlock_irqrestore(&allocator_request_lock, flags);
-
-#ifdef DEBUG_INIT
-	printk("resize_dma_pool: dma free sectors   = %d\n", scsi_dma_free_sectors);
-	printk("resize_dma_pool: dma sectors        = %d\n", dma_sectors);
-	printk("resize_dma_pool: need isa buffers   = %d\n", scsi_need_isa_buffer);
-#endif
-}
-
 #ifdef CONFIG_MODULES		/* a big #ifdef block... */
 
 /*
@@ -2771,6 +2452,8 @@
 		printk("scsi : %d host%s.\n", next_scsi_host,
 		       (next_scsi_host == 1) ? "" : "s");
 
+		scsi_make_blocked_list();
+
 		/* The next step is to call scan_scsis here.  This generates the
 		 * Scsi_Devices entries
 		 */
@@ -2809,7 +2492,7 @@
 		 * Now that we have all of the devices, resize the DMA pool,
 		 * as required.  */
 		if (!out_of_space)
-			resize_dma_pool();
+			scsi_resize_dma_pool();
 
 
 		/* This does any final handling that is required. */
@@ -3027,7 +2710,7 @@
 	 * do the right thing and free everything.
 	 */
 	if (!scsi_hosts)
-		resize_dma_pool();
+		scsi_resize_dma_pool();
 
 	printk("scsi : %d host%s.\n", next_scsi_host,
 	       (next_scsi_host == 1) ? "" : "s");
@@ -3039,6 +2722,7 @@
 	       (scsi_memory_upper_value - scsi_init_memory_start) / 1024);
 #endif
 
+	scsi_make_blocked_list();
 
 	/* There were some hosts that were loaded at boot time, so we cannot
 	   do any more than this */
@@ -3122,7 +2806,7 @@
 	if (tpnt->finish && tpnt->nr_dev)
 		(*tpnt->finish) ();
 	if (!out_of_space)
-		resize_dma_pool();
+		scsi_resize_dma_pool();
 	MOD_INC_USE_COUNT;
 
 	if (out_of_space) {
@@ -3372,39 +3056,11 @@
 
 	scsi_loadable_module_flag = 1;
 
-	dma_sectors = PAGE_SIZE / SECTOR_SIZE;
-	scsi_dma_free_sectors = dma_sectors;
-	/*
-	 * Set up a minimal DMA buffer list - this will be used during scan_scsis
-	 * in some cases.
-	 */
+        if( scsi_init_minimal_dma_pool() == 0 )
+        {
+                return 1;
+        }
 
-	/* One bit per sector to indicate free/busy */
-	size = (dma_sectors / SECTORS_PER_PAGE) * sizeof(FreeSectorBitmap);
-	dma_malloc_freelist = (FreeSectorBitmap *)
-	    scsi_init_malloc(size, GFP_ATOMIC);
-	if (dma_malloc_freelist) {
-		/* One pointer per page for the page list */
-		dma_malloc_pages = (unsigned char **) scsi_init_malloc(
-									      (dma_sectors / SECTORS_PER_PAGE) * sizeof(*dma_malloc_pages),
-							     GFP_ATOMIC);
-		if (dma_malloc_pages) {
-			dma_malloc_pages[0] = (unsigned char *)
-			    scsi_init_malloc(PAGE_SIZE, GFP_ATOMIC | GFP_DMA);
-			if (dma_malloc_pages[0])
-				has_space = 1;
-		}
-	}
-	if (!has_space) {
-		if (dma_malloc_freelist) {
-			scsi_init_free((char *) dma_malloc_freelist, size);
-			if (dma_malloc_pages)
-				scsi_init_free((char *) dma_malloc_pages,
-					       (dma_sectors / SECTORS_PER_PAGE) * sizeof(*dma_malloc_pages));
-		}
-		printk("scsi::init_module: failed, out of memory\n");
-		return 1;
-	}
 	/*
 	 * This is where the processing takes place for most everything
 	 * when commands are completed.
@@ -3427,7 +3083,7 @@
 	/*
 	 * Free up the DMA pool.
 	 */
-	resize_dma_pool();
+	scsi_resize_dma_pool();
 
 }
 
@@ -3481,7 +3137,7 @@
 
         SDpnt->device_queue = SCpnt;
 
-        blk_init_queue(&SDpnt->request_queue, scsi_request_fn);
+        blk_init_queue(&SDpnt->request_queue, scsi_get_request_handler(SDpnt, SDpnt->host));
        blk_queue_headactive(&SDpnt->request_queue, 0);
         SDpnt->request_queue.queuedata = (void *) SDpnt;
 
@@ -3509,7 +3165,7 @@
  */
 void scsi_free_host_dev(Scsi_Device * SDpnt)
 {
-        if( SDpnt->id != SDpnt->host->this_id )
+        if( (unsigned char) SDpnt->id != (unsigned char) SDpnt->host->this_id )
        {
                 panic("Attempt to delete wrong device\n");
         }
Index: linux/drivers/scsi/scsi.h
diff -u linux/drivers/scsi/scsi.h:1.1.1.5 linux/drivers/scsi/scsi.h:1.6
--- linux/drivers/scsi/scsi.h:1.1.1.5	Fri Jan  7 22:33:08 2000
+++ linux/drivers/scsi/scsi.h	Mon Jan 10 21:44:27 2000
@@ -365,90 +365,121 @@
  *  Initializes all SCSI devices.  This scans all scsi busses.
  */
 
-extern int scsi_dev_init(void);
-
-
-
-void *scsi_malloc(unsigned int);
-int scsi_free(void *, unsigned int);
 extern unsigned int scsi_logging_level;		/* What do we log? */
 extern unsigned int scsi_dma_free_sectors;	/* How much room do we have left */
 extern unsigned int scsi_need_isa_buffer;	/* True if some devices need indirection
 						   * buffers */
-extern void scsi_make_blocked_list(void);
 extern volatile int in_scan_scsis;
 extern const unsigned char scsi_command_size[8];
 
+
 /*
  * These are the error handling functions defined in scsi_error.c
  */
+extern void scsi_times_out(Scsi_Cmnd * SCpnt);
 extern void scsi_add_timer(Scsi_Cmnd * SCset, int timeout,
 			   void (*complete) (Scsi_Cmnd *));
-extern void scsi_done(Scsi_Cmnd * SCpnt);
 extern int scsi_delete_timer(Scsi_Cmnd * SCset);
 extern void scsi_error_handler(void *host);
-extern int scsi_retry_command(Scsi_Cmnd *);
-extern void scsi_finish_command(Scsi_Cmnd *);
 extern int scsi_sense_valid(Scsi_Cmnd *);
 extern int scsi_decide_disposition(Scsi_Cmnd * SCpnt);
 extern int scsi_block_when_processing_errors(Scsi_Device *);
 extern void scsi_sleep(int);
+
+/*
+ * Prototypes for functions in scsicam.c
+ */
 extern int  scsi_partsize(struct buffer_head *bh, unsigned long capacity,
                     unsigned int *cyls, unsigned int *hds,
                     unsigned int *secs);
 
 /*
+ * Prototypes for functions in scsi_dma.c
+ */
+void scsi_resize_dma_pool(void);
+int scsi_init_minimal_dma_pool(void);
+void *scsi_malloc(unsigned int);
+int scsi_free(void *, unsigned int);
+
+/*
  * Prototypes for functions in scsi_merge.c
  */
 extern void recount_segments(Scsi_Cmnd * SCpnt);
+extern void initialize_merge_fn(Scsi_Device * SDpnt);
 
 /*
+ * Prototypes for functions in scsi_queue.c
+ */
+extern int scsi_mlqueue_insert(Scsi_Cmnd * cmd, int reason);
+
+/*
  * Prototypes for functions in scsi_lib.c
  */
-extern void initialize_merge_fn(Scsi_Device * SDpnt);
-extern void scsi_request_fn(request_queue_t * q);
+extern void scsi_blocked_request_fn(request_queue_t * q);
+extern Scsi_Cmnd *scsi_end_request(Scsi_Cmnd * SCpnt, int uptodate,
+				   int sectors);
+extern struct Scsi_Device_Template *scsi_get_request_dev(struct request *);
+extern int scsi_init_cmd_errh(Scsi_Cmnd * SCpnt);
+extern int scsi_insert_special_cmd(Scsi_Cmnd * SCpnt, int);
+extern void scsi_io_completion(Scsi_Cmnd * SCpnt, int good_sectors,
+			       int block_sectors);
+extern void scsi_make_blocked_list(void);
 extern void scsi_queue_next_request(request_queue_t * q, Scsi_Cmnd * SCpnt);
+extern void scsi_request_fn(request_queue_t * q);
 
-extern int scsi_insert_special_cmd(Scsi_Cmnd * SCpnt, int);
-extern int scsi_dispatch_cmd(Scsi_Cmnd * SCpnt);
 
 /*
  * Prototypes for functions in scsi.c
  */
-
-/*
- *  scsi_abort aborts the current command that is executing on host host.
- *  The error code, if non zero is returned in the host byte, otherwise
- *  DID_ABORT is returned in the hostbyte.
- */
-
+extern int scsi_dispatch_cmd(Scsi_Cmnd * SCpnt);
+extern void scsi_bottom_half_handler(void);
+extern void scsi_build_commandblocks(Scsi_Device * SDpnt);
+extern void scsi_done(Scsi_Cmnd * SCpnt);
+extern void scsi_finish_command(Scsi_Cmnd *);
+extern int scsi_retry_command(Scsi_Cmnd *);
+extern Scsi_Cmnd *scsi_allocate_device(Scsi_Device *, int, int);
+extern void scsi_release_command(Scsi_Cmnd *);
 extern void scsi_do_cmd(Scsi_Cmnd *, const void *cmnd,
 			void *buffer, unsigned bufflen,
 			void (*done) (struct scsi_cmnd *),
 			int timeout, int retries);
-
 extern void scsi_wait_cmd(Scsi_Cmnd *, const void *cmnd,
 			  void *buffer, unsigned bufflen,
 			  void (*done) (struct scsi_cmnd *),
 			  int timeout, int retries);
+extern int scsi_dev_init(void);
 
-extern Scsi_Cmnd *scsi_allocate_device(Scsi_Device *, int, int);
-
-extern void scsi_release_command(Scsi_Cmnd *);
 
+/*
+ * Prototypes for functions/data in hosts.c
+ */
 extern int max_scsi_hosts;
 
+/*
+ * Prototypes for functions in scsi_proc.c
+ */
 extern void proc_print_scsidevice(Scsi_Device *, char *, int *, int);
 extern struct proc_dir_entry *proc_scsi;
 
+/*
+ * Prototypes for functions in constants.c
+ */
 extern void print_command(unsigned char *);
 extern void print_sense(const char *, Scsi_Cmnd *);
 extern void print_driverbyte(int scsiresult);
 extern void print_hostbyte(int scsiresult);
+extern void print_status (int status);
 
 /*
  *  The scsi_device struct contains what we know about each given scsi
  *  device.
+ *
+ * FIXME(eric) - one of the great regrets that I have is that I failed to define
+ * these structure elements as something like sdev_foo instead of foo.  This would
+ * make it so much easier to grep through sources and so forth.  I propose that
+ * all new elements that get added to these structures follow this convention.
+ * As time goes on and as people have the stomach for it, it should be possible to
+ * go back and retrofit at least some of the elements here with with the prefix.
  */
 
 struct scsi_device {
@@ -538,6 +569,14 @@
 } Scsi_Pointer;
 
 
+/*
+ * FIXME(eric) - one of the great regrets that I have is that I failed to define
+ * these structure elements as something like sc_foo instead of foo.  This would
+ * make it so much easier to grep through sources and so forth.  I propose that
+ * all new elements that get added to these structures follow this convention.
+ * As time goes on and as people have the stomach for it, it should be possible to
+ * go back and retrofit at least some of the elements here with with the prefix.
+ */
 struct scsi_cmnd {
 /* private: */
 	/*
@@ -680,16 +719,6 @@
  */
 #define SCSI_MLQUEUE_HOST_BUSY   0x1055
 #define SCSI_MLQUEUE_DEVICE_BUSY 0x1056
-
-extern int scsi_mlqueue_insert(Scsi_Cmnd * cmd, int reason);
-
-extern Scsi_Cmnd *scsi_end_request(Scsi_Cmnd * SCpnt, int uptodate,
-				   int sectors);
-
-extern void scsi_io_completion(Scsi_Cmnd * SCpnt, int good_sectors,
-			       int block_sectors);
-
-extern struct Scsi_Device_Template *scsi_get_request_dev(struct request *);
 
 #define SCSI_SLEEP(QUEUE, CONDITION) {		    \
     if (CONDITION) {			            \
Index: linux/drivers/scsi/scsi_dma.c
diff -u /dev/null linux/drivers/scsi/scsi_dma.c:1.3
--- /dev/null	Mon Jan 10 22:17:54 2000
+++ linux/drivers/scsi/scsi_dma.c	Sat Jan  8 21:19:18 2000
@@ -0,0 +1,442 @@
+/*
+ *  scsi_dma.c Copyright (C) 2000 Eric Youngdale
+ *
+ *  mid-level SCSI DMA bounce buffer allocator
+ *
+ */
+
+#define __NO_VERSION__
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/blk.h>
+
+
+#include "scsi.h"
+#include "hosts.h"
+#include "constants.h"
+
+#ifdef CONFIG_KMOD
+#include <linux/kmod.h>
+#endif
+
+/*
+ * PAGE_SIZE must be a multiple of the sector size (512).  True
+ * for all reasonably recent architectures (even the VAX...).
+ */
+#define SECTOR_SIZE		512
+#define SECTORS_PER_PAGE	(PAGE_SIZE/SECTOR_SIZE)
+
+#if SECTORS_PER_PAGE <= 8
+typedef unsigned char FreeSectorBitmap;
+#elif SECTORS_PER_PAGE <= 32
+typedef unsigned int FreeSectorBitmap;
+#else
+#error You lose.
+#endif
+
+/*
+ * Used for access to internal allocator used for DMA safe buffers.
+ */
+static spinlock_t allocator_request_lock = SPIN_LOCK_UNLOCKED;
+
+static FreeSectorBitmap *dma_malloc_freelist = NULL;
+static int need_isa_bounce_buffers;
+static unsigned int dma_sectors = 0;
+unsigned int scsi_dma_free_sectors = 0;
+unsigned int scsi_need_isa_buffer = 0;
+static unsigned char **dma_malloc_pages = NULL;
+
+/*
+ * Function:    scsi_malloc
+ *
+ * Purpose:     Allocate memory from the DMA-safe pool.
+ *
+ * Arguments:   len       - amount of memory we need.
+ *
+ * Lock status: No locks assumed to be held.  This function is =
SMP-safe.
+ *
+ * Returns:     Pointer to memory block.
+ *
+ * Notes:       Prior to the new queue code, this function was not =
SMP-safe.
+ *              This function can only allocate in units of sectors
+ *              (i.e. 512 bytes).
+ *
+ *              We cannot use the normal system allocator becuase we =
need
+ *              to be able to guarantee that we can process a complete =
disk
+ *              I/O request without touching the system allocator.  =
Think
+ *              about it - if the system were heavily swapping, and =
tried to
+ *              write out a block of memory to disk, and the SCSI code =
needed
+ *              to allocate more memory in order to be able to write =
the
+ *              data to disk, you would wedge the system.
+ */
+void *scsi_malloc(unsigned int len)
+{
+	unsigned int nbits, mask;
+	unsigned long flags;
+
+	int i, j;
+	if (len % SECTOR_SIZE !=3D 0 || len > PAGE_SIZE)
+		return NULL;
+
+	nbits =3D len >> 9;
+	mask =3D (1 << nbits) - 1;
+
+	spin_lock_irqsave(&allocator_request_lock, flags);
+
+	for (i =3D 0; i < dma_sectors / SECTORS_PER_PAGE; i++)
+		for (j =3D 0; j <=3D SECTORS_PER_PAGE - nbits; j++) {
+			if ((dma_malloc_freelist[i] & (mask << j)) =3D=3D 0) {
+				dma_malloc_freelist[i] |=3D (mask << j);
+				scsi_dma_free_sectors -=3D nbits;
+#ifdef DEBUG
+				SCSI_LOG_MLQUEUE(3, printk("SMalloc: %d %p [From:%p]\n", len, =
dma_malloc_pages[i] + (j << 9)));
+				printk("SMalloc: %d %p [From:%p]\n", len, dma_malloc_pages[i] + (j =
<< 9));
+#endif
+				spin_unlock_irqrestore(&allocator_request_lock, flags);
+				return (void *) ((unsigned long) dma_malloc_pages[i] + (j << 9));
+			}
+		}
+	spin_unlock_irqrestore(&allocator_request_lock, flags);
+	return NULL;		/* Nope.  No more */
+}
+
+/*
+ * Function:    scsi_free
+ *
+ * Purpose:     Free memory into the DMA-safe pool.
+ *
+ * Arguments:   ptr       - data block we are freeing.
+ *              len       - size of block we are freeing.
+ *
+ * Lock status: No locks assumed to be held.  This function is SMP-safe.
+ *
+ * Returns:     0 on success.
+ *
+ * Notes:       This function *must* only be used to free memory
+ *              allocated from scsi_malloc().
+ *
+ *              Prior to the new queue code, this function was not SMP-safe.
+ *              This function can only free in units of sectors
+ *              (i.e. 512 bytes).
+ */
+int scsi_free(void *obj, unsigned int len)
+{
+	unsigned int page, sector, nbits, mask;
+	unsigned long flags;
+
+#ifdef DEBUG
+	unsigned long ret = 0;
+
+#ifdef __mips__
+	__asm__ __volatile__("move\t%0,$31":"=r"(ret));
+#else
+	ret = (unsigned long) __builtin_return_address(0);
+#endif
+	printk("scsi_free %p %d\n", obj, len);
+	SCSI_LOG_MLQUEUE(3, printk("SFree: %p %d\n", obj, len));
+#endif
+
+	spin_lock_irqsave(&allocator_request_lock, flags);
+
+	for (page = 0; page < dma_sectors / SECTORS_PER_PAGE; page++) {
+		unsigned long page_addr = (unsigned long) dma_malloc_pages[page];
+		if ((unsigned long) obj >= page_addr &&
+		    (unsigned long) obj < page_addr + PAGE_SIZE) {
+			sector = (((unsigned long) obj) - page_addr) >> 9;
+
+			nbits = len >> 9;
+			mask = (1 << nbits) - 1;
+
+			if ((mask << sector) >= (1 << SECTORS_PER_PAGE))
+				panic("scsi_free:Bad memory alignment");
+
+			if ((dma_malloc_freelist[page] &
+			     (mask << sector)) != (mask << sector)) {
+#ifdef DEBUG
+				printk("scsi_free(obj=%p, len=%d) called from %08lx\n",
+				       obj, len, ret);
+#endif
+				panic("scsi_free:Trying to free unused memory");
+			}
+			scsi_dma_free_sectors += nbits;
+			dma_malloc_freelist[page] &= ~(mask << sector);
+			spin_unlock_irqrestore(&allocator_request_lock, flags);
+			return 0;
+		}
+	}
+	panic("scsi_free:Bad offset");
+}
+
+
+/*
+ * Function:    scsi_resize_dma_pool
+ *
+ * Purpose:     Ensure that the DMA pool is sufficiently large to be
+ *              able to guarantee that we can always process I/O requests
+ *              without calling the system allocator.
+ *
+ * Arguments:   None.
+ *
+ * Lock status: No locks assumed to be held.  This function is SMP-safe.
+ *
+ * Returns:     Nothing
+ *
+ * Notes:       Prior to the new queue code, this function was not SMP-safe.
+ *              Go through the device list and recompute the most appropriate
+ *              size for the dma pool.  Then grab more memory (as required).
+ */
+void scsi_resize_dma_pool(void)
+{
+	int i, k;
+	unsigned long size;
+	unsigned long flags;
+	struct Scsi_Host *shpnt;
+	struct Scsi_Host *host = NULL;
+	Scsi_Device *SDpnt;
+	FreeSectorBitmap *new_dma_malloc_freelist = NULL;
+	unsigned int new_dma_sectors = 0;
+	unsigned int new_need_isa_buffer = 0;
+	unsigned char **new_dma_malloc_pages = NULL;
+	int out_of_space = 0;
+
+	spin_lock_irqsave(&allocator_request_lock, flags);
+
+	if (!scsi_hostlist) {
+		/*
+		 * Free up the DMA pool.
+		 */
+		if (scsi_dma_free_sectors != dma_sectors)
+			panic("SCSI DMA pool memory leak %d %d\n", scsi_dma_free_sectors, dma_sectors);
+
+		for (i = 0; i < dma_sectors / SECTORS_PER_PAGE; i++)
+			free_pages((unsigned long) dma_malloc_pages[i], 0);
+		if (dma_malloc_pages)
+			kfree((char *) dma_malloc_pages);
+		dma_malloc_pages = NULL;
+		if (dma_malloc_freelist)
+			kfree((char *) dma_malloc_freelist);
+		dma_malloc_freelist = NULL;
+		dma_sectors = 0;
+		scsi_dma_free_sectors = 0;
+		spin_unlock_irqrestore(&allocator_request_lock, flags);
+		return;
+	}
+	/* Next, check to see if we need to extend the DMA buffer pool */
+
+	new_dma_sectors = 2 * SECTORS_PER_PAGE;		/* Base value we use */
+
+	if (__pa(high_memory) - 1 > ISA_DMA_THRESHOLD)
+		need_isa_bounce_buffers = 1;
+	else
+		need_isa_bounce_buffers = 0;
+
+	if (scsi_devicelist)
+		for (shpnt = scsi_hostlist; shpnt; shpnt = shpnt->next)
+			new_dma_sectors += SECTORS_PER_PAGE;	/* Increment for each host */
+
+	for (host = scsi_hostlist; host; host = host->next) {
+		for (SDpnt = host->host_queue; SDpnt; SDpnt = SDpnt->next) {
+			/*
+			 * sd and sr drivers allocate scatterlists.
+			 * sr drivers may allocate for each command 1x2048 or 2x1024 extra
+			 * buffers for 2k sector size and 1k fs.
+			 * sg driver allocates buffers < 4k.
+			 * st driver does not need buffers from the dma pool.
+			 * estimate 4k buffer/command for devices of unknown type (should panic).
+			 */
+			if (SDpnt->type == TYPE_WORM || SDpnt->type == TYPE_ROM ||
+			    SDpnt->type == TYPE_DISK || SDpnt->type == TYPE_MOD) {
+				new_dma_sectors += ((host->sg_tablesize *
+				sizeof(struct scatterlist) + 511) >> 9) *
+				 SDpnt->queue_depth;
+				if (SDpnt->type == TYPE_WORM || SDpnt->type == TYPE_ROM)
+					new_dma_sectors += (2048 >> 9) * SDpnt->queue_depth;
+			} else if (SDpnt->type == TYPE_SCANNER ||
+				   SDpnt->type == TYPE_PROCESSOR ||
+				   SDpnt->type == TYPE_MEDIUM_CHANGER ||
+				   SDpnt->type == TYPE_ENCLOSURE) {
+				new_dma_sectors += (4096 >> 9) * SDpnt->queue_depth;
+			} else {
+				if (SDpnt->type != TYPE_TAPE) {
+					printk("resize_dma_pool: unknown device type %d\n", SDpnt->type);
+					new_dma_sectors += (4096 >> 9) * SDpnt->queue_depth;
+				}
+			}
+
+			if (host->unchecked_isa_dma &&
+			    need_isa_bounce_buffers &&
+			    SDpnt->type != TYPE_TAPE) {
+				new_dma_sectors += (PAGE_SIZE >> 9) * host->sg_tablesize *
+				    SDpnt->queue_depth;
+				new_need_isa_buffer++;
+			}
+		}
+	}
+
+#ifdef DEBUG_INIT
+	printk("resize_dma_pool: needed dma sectors = %d\n", new_dma_sectors);
+#endif
+
+	/* limit DMA memory to 32MB: */
+	new_dma_sectors = (new_dma_sectors + 15) & 0xfff0;
+
+	/*
+	 * We never shrink the buffers - this leads to
+	 * race conditions that I would rather not even think
+	 * about right now.
+	 */
+#if 0				/* Why do this? No gain and risks out_of_space */
+	if (new_dma_sectors < dma_sectors)
+		new_dma_sectors = dma_sectors;
+#endif
+	if (new_dma_sectors <= dma_sectors) {
+		spin_unlock_irqrestore(&allocator_request_lock, flags);
+		return;		/* best to quit while we are in front */
+	}
+
+	for (k = 0; k < 20; ++k) {	/* just in case */
+		out_of_space = 0;
+		size = (new_dma_sectors / SECTORS_PER_PAGE) *
+		    sizeof(FreeSectorBitmap);
+		new_dma_malloc_freelist = (FreeSectorBitmap *)
+		    kmalloc(size, GFP_ATOMIC);
+		if (new_dma_malloc_freelist) {
+			memset(new_dma_malloc_freelist, 0, size);
+			size = (new_dma_sectors / SECTORS_PER_PAGE) *
+			    sizeof(*new_dma_malloc_pages);
+			new_dma_malloc_pages = (unsigned char **)
+			    kmalloc(size, GFP_ATOMIC);
+			if (!new_dma_malloc_pages) {
+				kfree((char *) new_dma_malloc_freelist);
+				out_of_space = 1;
+			} else {
+				memset(new_dma_malloc_pages, 0, size);
+			}
+		} else
+			out_of_space = 1;
+
+		if ((!out_of_space) && (new_dma_sectors > dma_sectors)) {
+			for (i = dma_sectors / SECTORS_PER_PAGE;
+			   i < new_dma_sectors / SECTORS_PER_PAGE; i++) {
+				new_dma_malloc_pages[i] = (unsigned char *)
+				    __get_free_pages(GFP_ATOMIC | GFP_DMA, 0);
+				if (!new_dma_malloc_pages[i])
+					break;
+			}
+			if (i != new_dma_sectors / SECTORS_PER_PAGE) {	/* clean up */
+				int k = i;
+
+				out_of_space = 1;
+				for (i = 0; i < k; ++i)
+					free_pages((unsigned long) new_dma_malloc_pages[i], 0);
+			}
+		}
+		if (out_of_space) {	/* try scaling down new_dma_sectors request */
+			printk("scsi::resize_dma_pool: WARNING, dma_sectors=%u, "
+			       "wanted=%u, scaling\n", dma_sectors, new_dma_sectors);
+			if (new_dma_sectors < (8 * SECTORS_PER_PAGE))
+				break;	/* pretty well hopeless ... */
+			new_dma_sectors = (new_dma_sectors * 3) / 4;
+			new_dma_sectors = (new_dma_sectors + 15) & 0xfff0;
+			if (new_dma_sectors <= dma_sectors)
+				break;	/* stick with what we have got */
+		} else
+			break;	/* found space ... */
+	}			/* end of for loop */
+	if (out_of_space) {
+		spin_unlock_irqrestore(&allocator_request_lock, flags);
+		scsi_need_isa_buffer = new_need_isa_buffer;	/* some useful info */
+		printk("      WARNING, not enough memory, pool not expanded\n");
+		return;
+	}
+	/* When we dick with the actual DMA list, we need to
+	 * protect things
+	 */
+	if (dma_malloc_freelist) {
+		size = (dma_sectors / SECTORS_PER_PAGE) * sizeof(FreeSectorBitmap);
+		memcpy(new_dma_malloc_freelist, dma_malloc_freelist, size);
+		kfree((char *) dma_malloc_freelist);
+	}
+	dma_malloc_freelist = new_dma_malloc_freelist;
+
+	if (dma_malloc_pages) {
+		size = (dma_sectors / SECTORS_PER_PAGE) * sizeof(*dma_malloc_pages);
+		memcpy(new_dma_malloc_pages, dma_malloc_pages, size);
+		kfree((char *) dma_malloc_pages);
+	}
+	scsi_dma_free_sectors += new_dma_sectors - dma_sectors;
+	dma_malloc_pages = new_dma_malloc_pages;
+	dma_sectors = new_dma_sectors;
+	scsi_need_isa_buffer = new_need_isa_buffer;
+
+	spin_unlock_irqrestore(&allocator_request_lock, flags);
+
+#ifdef DEBUG_INIT
+	printk("resize_dma_pool: dma free sectors   = %d\n", scsi_dma_free_sectors);
+	printk("resize_dma_pool: dma sectors        = %d\n", dma_sectors);
+	printk("resize_dma_pool: need isa buffers   = %d\n", scsi_need_isa_buffer);
+#endif
+}
+
+/*
+ * Function:    scsi_init_minimal_dma_pool
+ *
+ * Purpose:     Allocate a minimal (1-page) DMA pool.
+ *
+ * Arguments:   None.
+ *
+ * Lock status: No locks assumed to be held.  This function is SMP-safe.
+ *
+ * Returns:     0 on success, 1 if the pool could not be allocated.
+ *
+ * Notes:
+ */
+int scsi_init_minimal_dma_pool(void)
+{
+	unsigned long size;
+	unsigned long flags;
+	int has_space = 0;
+
+	spin_lock_irqsave(&allocator_request_lock, flags);
+
+	dma_sectors = PAGE_SIZE / SECTOR_SIZE;
+	scsi_dma_free_sectors = dma_sectors;
+	/*
+	 * Set up a minimal DMA buffer list - this will be used during scan_scsis
+	 * in some cases.
+	 */
+
+	/* One bit per sector to indicate free/busy */
+	size = (dma_sectors / SECTORS_PER_PAGE) * sizeof(FreeSectorBitmap);
+	dma_malloc_freelist = (FreeSectorBitmap *)
+	    kmalloc(size, GFP_ATOMIC);
+	if (dma_malloc_freelist) {
+		memset(dma_malloc_freelist, 0, size);
+		/* One pointer per page for the page list */
+		size = (dma_sectors / SECTORS_PER_PAGE) * sizeof(*dma_malloc_pages);
+		dma_malloc_pages = (unsigned char **) kmalloc(size, GFP_ATOMIC);
+		if (dma_malloc_pages) {
+			memset(dma_malloc_pages, 0, size);
+			dma_malloc_pages[0] = (unsigned char *)
+			    __get_free_pages(GFP_ATOMIC | GFP_DMA, 0);
+			if (dma_malloc_pages[0])
+				has_space = 1;
+		}
+	}
+	if (!has_space) {
+		if (dma_malloc_freelist) {
+			kfree((char *) dma_malloc_freelist);
+			if (dma_malloc_pages)
+				kfree((char *) dma_malloc_pages);
+		}
+		spin_unlock_irqrestore(&allocator_request_lock, flags);
+		printk("scsi::init_module: failed, out of memory\n");
+		return 1;
+	}
+
+	spin_unlock_irqrestore(&allocator_request_lock, flags);
+	return 0;
+}
Index: linux/drivers/scsi/scsi_lib.c
diff -u linux/drivers/scsi/scsi_lib.c:1.1.1.5 linux/drivers/scsi/scsi_lib.c:1.8
--- linux/drivers/scsi/scsi_lib.c:1.1.1.5	Fri Jan  7 22:33:08 2000
+++ linux/drivers/scsi/scsi_lib.c	Mon Jan 10 21:44:27 2000
@@ -51,6 +51,13 @@
  */
 
 /*
+ * For hosts that request single-file access to the ISA bus, this is a
+ * pointer to the currently active host.
+ */
+volatile struct Scsi_Host *host_active = NULL;
+
+
+/*
  * Function:    scsi_insert_special_cmd()
  *
  * Purpose:     Insert pre-formed command into request queue.
@@ -202,6 +209,23 @@
  *              If SCpnt is NULL, it means that the previous command
  *              was completely finished, and we should simply start
  *              a new command, if possible.
+ *
+ *		This is where a lot of special case code has begun to
+ *		accumulate.  It doesn't really affect readability or
+ *		anything, but it might be considered architecturally
+ *		inelegant.  If more of these special cases start to
+ *		accumulate, I am thinking along the lines of implementing
+ *		an atexit()-like mechanism that gets run when commands
+ *		complete.  I am not convinced that it is worth the
+ *		added overhead, however.  Right now as things stand,
+ *		there are simple conditional checks, and most hosts
+ *		would skip past.
+ *
+ *		Another possible solution would be to tailor different
+ *		handler functions, sort of like what we did in scsi_merge.c.
+ *		This is probably a better solution, but the number of different
+ *		permutations grows as 2**N, and if too many more special cases
+ *		get added, we start to get screwed.
  */
 void scsi_queue_next_request(request_queue_t * q, Scsi_Cmnd * SCpnt)
 {
@@ -287,6 +311,48 @@
 			SHpnt->some_device_starved = 0;
 		}
 	}
+
+	/*
+	 * This is the code to deal with blocked hosts.  The idea is that if
+	 * the current host is blocked, yet inactive (we have completed all
+	 * outstanding requests, and the current host was the "owner"), then
+	 * we walk through the list and search for a new owner, and queue up
+	 * some commands for those devices.
+	 */
+#ifdef CONFIG_SCSI_HOST_BLOCK
+	if(    SDpnt->host->block != NULL
+	    && host_active != NULL
+	    && SDpnt->host == host_active
+	    && host_active->host_busy == 0 ) {
+		/*
+		 * No host currently active.  Look for someone who is idle and
+		 * who has requests.
+		 */
+		host_active = NULL;
+
+		for(SHpnt = SDpnt->host->block; SHpnt != SDpnt->host; SHpnt = SHpnt->block) {
+			for (SDpnt = SHpnt->host_queue; SDpnt; SDpnt = SDpnt->next) {
+				request_queue_t *q;
+				if ((SHpnt->can_queue > 0 && (SHpnt->host_busy >= SHpnt->can_queue))
+				    || (SHpnt->host_blocked)) {
+					break;
+				}
+				if (SDpnt->device_blocked || !SDpnt->starved) {
+					continue;
+				}
+				q = &SDpnt->request_queue;
+				q->request_fn(q);
+				all_clear = 0;
+			}
+			if (SDpnt == NULL && all_clear) {
+				SHpnt->some_device_starved =3D 0;
+			}
+			if( host_active != NULL )
+			{
+				break;
+			}
+		}
+	}
+#endif
+
 	spin_unlock_irqrestore(&io_request_lock, flags);
 }
 
@@ -732,6 +798,56 @@
 }
 
 /*
+ * Function:    scsi_blocked_request_fn()
+ *
+ * Purpose:     A request function wrapper for SCSI hosts that have blocking enabled.
+ *
+ * Arguments:   q       - Pointer to actual queue.
+ *
+ * Returns:     Nothing
+ *
+ * Lock status: IO request lock assumed to be held when called.
+ *
+ * Notes:
+ */
+void scsi_blocked_request_fn(request_queue_t * q)
+{
+	Scsi_Device *SDpnt;
+	struct Scsi_Host *SHpnt;
+
+	ASSERT_LOCK(&io_request_lock, 1);
+
+	SDpnt = (Scsi_Device *) q->queuedata;
+	if (!SDpnt) {
+		panic("Missing device");
+	}
+	SHpnt = SDpnt->host;
+
+	/*
+	 * If another host currently owns the bus, ignore this request for now.
+	 */
+	if( host_active != NULL && host_active != SHpnt ) {
+		return;
+	}
+
+	if( host_active == NULL ) {
+		host_active = SHpnt;
+	}
+
+	/*
+	 * At this point, call the normal request function.
+	 */
+	scsi_request_fn(q);
+
+	/*
+	 * If the host is now idle, release ownership so another blocked
+	 * host can become active.
+	 */
+	if( host_active == SHpnt && SHpnt->host_busy == 0 ) {
+		host_active = NULL;
+	}
+}
+
+/*
  * Function:    scsi_request_fn()
  *
  * Purpose:     Generic version of request function for SCSI hosts.
@@ -985,3 +1101,91 @@
 		spin_lock_irq(&io_request_lock);
 	}
 }
+
+/*
+ * Function:    scsi_make_blocked_list
+ *
+ * Purpose:     Build linked list of hosts that require blocking.
+ *
+ * Arguments:   None.
+ *
+ * Returns:     Nothing
+ *
+ * Notes:       Blocking is sort of a hack that is used to prevent more than one
+ *              host adapter from being active at one time.  This is used in cases
+ *              where the ISA bus becomes unreliable if you have more than one
+ *              host adapter really pumping data through.
+ *
+ *              We spent a lot of time examining the problem, and I *believe* that
+ *              the problem is bus related as opposed to being a driver bug.
+ *
+ *              The blocked list is used as part of the synchronization object
+ *              that we use to ensure that only one host is active at one time.
+ *              I (ERY) would like to make this go away someday, but this would
+ *              require that we have a recursive mutex object.
+ *
+ *		Note2: Now I wish I remembered what I meant by that, because we now have
+ *		reader-writer locks...
+ */
+
+void scsi_make_blocked_list(void)
+{
+#ifdef CONFIG_SCSI_HOST_BLOCK
+	int block_count = 0, index;
+	struct Scsi_Host *sh[128], *shpnt;
+
+	/*
+	 * Create a circular linked list from the scsi hosts which have
+	 * the "wish_block" field in the Scsi_Host structure set.
+	 * The blocked list should include all the scsi hosts using ISA DMA.
+	 * In some systems, using two dma channels simultaneously causes
+	 * unpredictable results.
+	 * Among the scsi hosts in the blocked list, only one host at a time
+	 * is allowed to have active commands queued. The transition from
+	 * one active host to the next one is allowed only when host_busy == 0
+	 * for the active host (which implies host_busy == 0 for all the hosts
+	 * in the list). Moreover for block devices the transition to a new
+	 * active host is allowed only when a request is completed, since a
+	 * block device request can be divided into multiple scsi commands
+	 * (when there are few sg lists or clustering is disabled).
+	 *
+	 * (DB, 4 Feb 1995)
+	 */
+
+
+	host_active = NULL;
+
+	for (shpnt = scsi_hostlist; shpnt; shpnt = shpnt->next) {
+
+#if 0
+		/*
+		 * Is this a candidate for the blocked list?
+		 * Useful to put into the blocked list all the hosts whose driver
+		 * does not know about the host->block feature.
+		 */
+		if (shpnt->unchecked_isa_dma)
+			shpnt->wish_block = 1;
+#endif
+
+		if (shpnt->wish_block)
+			sh[block_count++] = shpnt;
+	}
+
+	if (block_count == 1)
+		sh[0]->block = NULL;
+
+	else if (block_count > 1) {
+
+		for (index = 0; index < block_count - 1; index++) {
+			sh[index]->block = sh[index + 1];
+			printk("scsi%d : added to blocked host list.\n",
+			       sh[index]->host_no);
+		}
+
+		sh[block_count - 1]->block = sh[0];
+		printk("scsi%d : added to blocked host list.\n",
+		       sh[index]->host_no);
+	}
+#endif
+}
+
Index: linux/include/linux/isapnp.h
diff -u linux/include/linux/isapnp.h:1.1.1.1 linux/include/linux/isapnp.h:1.2
--- linux/include/linux/isapnp.h:1.1.1.1	Sat Jan  8 12:51:32 2000
+++ linux/include/linux/isapnp.h	Sat Jan  8 13:07:09 2000
@@ -173,21 +173,21 @@
 extern inline void isapnp_write_byte(unsigned char idx, unsigned char val) { ; }
 extern inline void isapnp_write_word(unsigned char idx, unsigned short val) { ; }
 extern inline void isapnp_write_dword(unsigned char idx, unsigned int val) { ; }
-extern void isapnp_wake(unsigned char csn) { ; }
-extern void isapnp_device(unsigned char device) { ; }
-extern void isapnp_activate(unsigned char device) { ; }
-extern void isapnp_deactivate(unsigned char device) { ; }
+extern inline void isapnp_wake(unsigned char csn) { ; }
+extern inline void isapnp_device(unsigned char device) { ; }
+extern inline void isapnp_activate(unsigned char device) { ; }
+extern inline void isapnp_deactivate(unsigned char device) { ; }
 /* manager */
-extern struct pci_bus *isapnp_find_card(unsigned short vendor,
-				        unsigned short device,
-				        struct pci_bus *from) { return NULL; }
-extern struct pci_dev *isapnp_find_dev(struct pci_bus *card,
-				       unsigned short vendor,
-				       unsigned short function,
+extern inline struct pci_bus *isapnp_find_card(unsigned short vendor,
+					       unsigned short device,
+					       struct pci_bus *from) { return NULL; }
+extern inline struct pci_dev *isapnp_find_dev(struct pci_bus *card,
+					      unsigned short vendor,
+					      unsigned short function,
 				       struct pci_dev *from) { return NULL; }
-extern void isapnp_resource_change(struct resource *resource,
-				   unsigned long start,
-				   unsigned long size) { ; }
+extern inline void isapnp_resource_change(struct resource *resource,
+					  unsigned long start,
+					  unsigned long size) { ; }
 
 #endif /* CONFIG_ISAPNP */
=20
Index: linux/include/linux/proc_fs.h
diff -u linux/include/linux/proc_fs.h:1.1.1.1 linux/include/linux/proc_fs.h:1.2
--- linux/include/linux/proc_fs.h:1.1.1.1	Sat Jan  8 12:51:31 2000
+++ linux/include/linux/proc_fs.h	Sat Jan  8 13:07:09 2000
@@ -181,11 +181,11 @@
 	mode_t mode, struct proc_dir_entry *parent) { return NULL; }
 
 extern inline void remove_proc_entry(const char *name, struct proc_dir_entry *parent) {};
-extern inline proc_dir_entry *proc_symlink(const char *name,
+extern inline struct proc_dir_entry *proc_symlink(const char *name,
 		struct proc_dir_entry *parent,char *dest) {return NULL;}
-extern inline proc_dir_entry *proc_mknod(const char *name,mode_t mode,
+extern inline struct proc_dir_entry *proc_mknod(const char *name,mode_t mode,
 		struct proc_dir_entry *parent,kdev_t rdev) {return NULL;}
-extern struct proc_dir_entry *proc_mkdir(const char *name,
+extern inline struct proc_dir_entry *proc_mkdir(const char *name,
 	struct proc_dir_entry *parent) {return NULL;}
 
 extern inline struct proc_dir_entry *create_proc_read_entry(const char *name,

------=_NextPart_000_0072_01BF5D57.511A9DE0--


-
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majordomo@vger.rutgers.edu
