--
\\|//
(o o)
ooO-(-)-Ooo------------------------------------------------------------
Email : Bernd.Patolla_at_mail.afibs.ch Bernd Patolla
Phone : (++41) 61 267 6536 Amt für Informatik Basel-Stadt
Fax : (++41) 61 267 9860 Postfach CH 4003 Basel
X400 : c=ch;a=400net;p=adminbs;o=afi;cn=Bernd Patolla
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
RAID stands for Redundant Array of Inexpensive Disks. The
key words are Redundant and Array. It takes more than one
disk to make an array. Making your JBOD (Just a Bunch Of
Disks) into an array combines two or more of the disks into
a single unit, which is presented to the host as a single
larger device.
The redundancy uses one or more of the disks in the array
to store additional information about the data, information
that can be used to regenerate the data should one member
be lost. Mirroring (RAID-1) simply duplicates the data among
multiple disks. RAID-5 stores the XOR (eXclusive OR) parity
of the data, distributed among the data areas of the disks.
What is sometimes called RAID-0 is striping; it doesn't
offer any redundancy (hence the 0), but does offer very
good performance.
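To make the parity idea concrete, here is a minimal sketch using
shell arithmetic (the byte values are invented for illustration;
a real controller does this across whole blocks):

    # RAID-5 parity is just XOR: lose one member and the XOR of
    # the survivors regenerates the missing data.
    d1=165
    d2=60
    parity=$(( d1 ^ d2 ))                     # what the array stores
    echo "recovered d1: $(( parity ^ d2 ))"   # prints 165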
It is worth noting that making the disks a RAID almost
certainly caused the data on those disks to be erased.
I hope you had a backup before doing it.
Neither UFS nor AdvFS cares whether the underlying device
is a RAID or a single disk. If you have backups, you can
partition the RAID just like any other disk and restore
the system to it. You'll have to update /etc/fstab and
/etc/rc.config to take into account that the file systems
have moved.
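For example, the moved file systems might get /etc/fstab entries
like these (a sketch only; the re0 partition letters are assumptions
matching the layout discussed later in this summary):

    /dev/re0a   /        ufs  rw  1 1
    /dev/re0b   swap1    ufs  sw  0 2   # swap entry, if swap lives in fstab
    /dev/re0d   /usr     ufs  rw  1 2
    /dev/re0e   /data1   ufs  rw  1 2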
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
When using RAID (0, 1, 3 or 5), the sets you build are always viewed
as a single logical volume. I've not experienced it myself, since I only
use AdvFS, but I've heard that there can be problems using UFS on
hardware RAID volumes.
Regards, Helgi.
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
Helgi Viggosson, | Internet: helgi_at_ott.is
Software Product Mgr, | X.25 (PSI%): PSI%274011324040::HELGI
OTT Ltd, | X.400: G=Helgi S=Viggosson P=OTT A=ISHOLF C=IS
Skeifan 17, | Phone: +354 533 5050
IS-108 Reykjavik, ICELAND | FAX: +354 533 5060
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
This is expected behavior. The hardware presents a logical drive (your RAID-5
set) as one device (re0) to the host. The host has no knowledge of how
the underlying disks are structured (this isn't entirely true, since the
swxcrmgr utility can get at the information, but for fault management
purposes only).
All the system sees is one big disk.
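You can see this for yourself (a hedged illustration; re0 is the
logical drive from the original question):

    disklabel -r re0    # one label covering the whole RAID-5 set;
                        # the member disks never show up separately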
> My question is this: how can I set up all of my original UFS
> filesystems on this new RAID logical drive. Are there other device files I
> should be using? Is it a requirement to move to AdvFS or can I restore my
> original UFS filesystems. I appreciate any help and will summarize. Thanks.
You will have to partition your RAID-5 "disk drive" in order to put the
original file systems back on this RAID set the way you'd like.
For example, you can re-partition re0 as follows:
re0a - /
re0b - swap
re0d - /usr
re0e - /data1
re0f - /data2
re0g - /data3
re0h - /data4
Or you can use LSM to divvy up the remainder of the disk.
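A minimal sketch of the disklabel route (assuming the stock
disklabel/newfs workflow; check the offsets and sizes against your
unit's actual capacity):

    disklabel -rw re0 SWXCR   # write a fresh default label
    disklabel -e re0          # edit the a,b,d,e,f,g,h partitions
    newfs /dev/rre0e          # then a file system per data partition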
You can use either UFS or AdvFS. Both support greater than 2GB file systems.
However, you should really consider the performance implications of
what you're trying to do.
What you're doing now is placing all your data on (logically) one big
disk. Before, you had some control over where your data was being placed.
Now the entire load is split over all the disks (which may or may not
be bad, but realize what trade-offs you're making in performance-tuning
capability), and you're incurring additional overhead
for doing the RAID-5 parity calculations. If you've got a heavy write
environment, the parity update can drag down performance unless you have a
PCI-based KZPSC with the battery-backed cache and write-back caching enabled.
If you've got a 1-channel RAID controller, you are throttling all I/O through
1 SCSI bus. A 3-channel controller will allow some simultaneous transfers of
information, which may be helpful in heavy I/O configurations.
Also, realize that with a RAID-5 configuration of 3 disks, you have lost
1/3 of the usable capacity of your system (the equivalent of one disk
is consumed by parity, which is what buys you the availability).
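As a quick worked example (a sketch; the 2GB member size is an
assumption):

    # RAID-5 usable capacity is (n - 1) * member_size
    n=3; member_mb=2048
    echo "usable: $(( (n - 1) * member_mb )) MB of $(( n * member_mb )) MB raw"
    # prints: usable: 4096 MB of 6144 MB raw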
There are lots of trade-offs when making the jump to a RAID-5 configuration.
--------
+---------------------------+tm Paul E. Rockwell
| | | | | | | | UNIX Sales Support Consultant
| d | i | g | i | t | a | l | Digital Equipment Corporation
| | | | | | | | 500 Enterprise Drive
+---------------------------+ Rocky Hill, CT 06067
Internet: rockwell_at_rch.dec.com Phone: (203)258-5022
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> disklabel -rw re0 SWXCR
Correct. All UNIX sees is one (large) drive.
> re1 and re2 are no longer recognized, as the 3 disks appear to be
> grouped as one volume (re0).
That's what it's supposed to do.
> My question is this: how can I set up all of my original UFS
> filesystems on this new RAID logical drive. Are there other device files I
> should be using? Is it a requirement to move to AdvFS or can I restore my
> original UFS filesystems. I appreciate any help and will summarize. Thanks.
Just partition this new large drive into appropriately-sized pieces, add
the file system of your choice and stir ;-). UFS won't be able to
handle a partition size larger than 2GB, but it looks like you'll want to
have your partitions smaller than that anyway.
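A hedged sketch of putting one of the original UFS dumps back (the
partition and tape device names are assumptions):

    newfs /dev/rre0e                       # UFS file system on the new partition
    mount /dev/re0e /data1
    cd /data1 && restore -rf /dev/nrmt0h   # read the ufsdump back from tape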
There *is* yet another way to do things. You can group all three disks
into one RAID5 group, and then, when you're making logical drives, break
it up into however many logical disks you want. I've only done this as
an experiment, but it does seem to work. The only problem I can see is
that you then have no idea where your data is. But then, with RAID5 you
don't know that anyway. My preference would be to keep / and /usr on
identifiable disks for ease of recovery, but with RAID5 you shouldn't
need to worry about that unless two disks go bad at once.
One note: I saw somewhere in official documentation that / has to be on a
disk which is logical unit number 0.
--
Thomas Erskine <tom_at_clark.dgim.doc.ca> (613) 998-2836
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
You are correct in that you now have one logical device.
You can use the disklabel command and create your 7
partitions root, swap, /usr, /data1-/data4 as a,b,d,e,f,g,h
and then mount those partitions. I would recommend that you
create the root, swap, /usr and then one big partition for
your /data1-/data4 filesystems (a,b,g,h). Then you can use
restore to restore the data and merge the /data1-/data4 file
systems into a new filesystem (for example, /data). You can
create links in the root directory so you can still refer to
your old /data1 - /data4 pathnames.
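A hedged sketch of that merge-and-link approach (partition letter and
mount point are assumptions):

    newfs /dev/rre0h && mount /dev/re0h /data
    for fs in data1 data2 data3 data4; do
        mkdir /data/$fs
        ln -s /data/$fs /$fs    # old /dataN pathnames keep working
    done
    # then restore each old dump into its /data/<name> directory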
Just a suggestion.
Good Luck,
Dave
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Right. Each re device corresponds to one of the logical RAID drives you
have defined. It sounds like you only have one right now. If you want
re1 and re2, you need to create the extra logical RAID drives in the
SWXCR config program.
> My question is this: how can I set up all of my original UFS
> filesystems on this new RAID logical drive. Are there other device files I
> should be using? Is it a requirement to move to AdvFS or can I restore my
> original UFS filesystems. I appreciate any help and will summarize.
> Thanks.
AdvFS is great. (It is not a requirement, though -- you could just
create logical RAID drives to mirror your old setup.) I would just
leave the RAID array as one logical RAID drive, create an AdvFS file
domain on the whole thing, and create filesets for wherever you want to
mount some space.
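A minimal sketch of that AdvFS setup (the domain and fileset names
are invented for illustration):

    mkfdmn /dev/re0c data_dmn            # file domain on the logical drive
    mkfset data_dmn data                 # one fileset; add more as needed
    mount -t advfs data_dmn#data /data   # filesets mount as domain#fileset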
I haven't tried restoring a UFS dump into an AdvFS filesystem.
Note that the equivalent of one physical disk is used for parity in RAID 5,
so you will only have 2/3 the space of the JBOD setup.
--David Gadbois
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++