Thank you for the information. I had totally missed the -m / '--maps' option; I gave kudos for it.
Unfortunately, I am afraid the information given by "-m" is very light for a mirrored raid1 logical volume: it gives some details about the "internal" Logical Volumes, but I still don't know how a given logical extent is mapped to physical extent(s).
To clarify, here is a small test I did:
- created a VG with 2 x 10 GB SAN virtual volumes from one storage array, and 2 other 10 GB volumes from another array
- tagged the 2 virtual volumes from the first array with BDX, and the 2 others with LAC (a sketch of the tagging commands follows the output below):
# pvs -o name,vg_name,tags
  PV                                 VG          PV Tags
  /dev/mapper/YRO_BDX_POCSGLXPK10_01 vg_sglxpk10 BDX
  /dev/mapper/YRO_BDX_POCSGLXPK10_02 vg_sglxpk10 BDX
  /dev/mapper/YRO_LAC_POCSGLXPK10_01 vg_sglxpk10 LAC
  /dev/mapper/YRO_LAC_POCSGLXPK10_02 vg_sglxpk10 LAC
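For reference, the tags were put on the PVs with pvchange, roughly like this (a sketch from memory, device names as above):
# pvchange --addtag BDX /dev/mapper/YRO_BDX_POCSGLXPK10_01
# pvchange --addtag BDX /dev/mapper/YRO_BDX_POCSGLXPK10_02
# pvchange --addtag LAC /dev/mapper/YRO_LAC_POCSGLXPK10_01
# pvchange --addtag LAC /dev/mapper/YRO_LAC_POCSGLXPK10_02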
- then, I tried to create an 11 GB mirrored volume. Because the size is greater than one SAN volume, the logical volume has to lie on 2 SAN volumes and the mirror on the other 2. The challenge was to have all extents from one source (for example BDX) mirrored on the other source (LAC). So I tried this:
lvcreate -n lvol1 -L 11G -m 1 --type raid1 vg_sglxpk10
If I try to analyze how extents are distributed across physical volumes with the -m option (with the default 4 MB extents, 11 GB = 2816 logical extents, hence the "0 to 2815" below):
# lvdisplay -m /dev/vg_sglxpk10/lvol1
.../...
  --- Segments ---
  Logical extents 0 to 2815:
    Type                raid1
    Monitoring          monitored
    Raid Data LV 0
      Logical volume    lvol1_rimage_0
      Logical extents   0 to 2815
    Raid Data LV 1
      Logical volume    lvol1_rimage_1
      Logical extents   0 to 2815
    Raid Metadata LV 0  lvol1_rmeta_0
    Raid Metadata LV 1  lvol1_rmeta_1
I can see the distribution against the "internal" Logical Volumes lvol1_rimage_0 and lvol1_rimage_1, but not directly against Physical Volumes. So I am unable to tell whether the mirroring is done between 2 volumes from the same array [ bad :-( ] or from different arrays [ nice :-) ].
And in the case of this test, there is a real problem:
- First, _rimage_0 lies on 1 volume from one array and 1 volume from the other array. Same thing for the internal LV _rimage_1. It means that a logical extent in this configuration could be mirrored between two physical extents that both sit on the same array.
It can be checked like this:
# lvs -a -o name,vg_name,devices vg_sglxpk10
  LV               VG          Devices
  lvol1            vg_sglxpk10 lvol1_rimage_0(0),lvol1_rimage_1(0)
  [lvol1_rimage_0] vg_sglxpk10 /dev/mapper/YRO_BDX_POCSGLXPK10_01(1)
  [lvol1_rimage_0] vg_sglxpk10 /dev/mapper/YRO_LAC_POCSGLXPK10_02(0)
  [lvol1_rimage_1] vg_sglxpk10 /dev/mapper/YRO_BDX_POCSGLXPK10_02(1)
  [lvol1_rimage_1] vg_sglxpk10 /dev/mapper/YRO_LAC_POCSGLXPK10_01(0)
  [lvol1_rmeta_0]  vg_sglxpk10 /dev/mapper/YRO_BDX_POCSGLXPK10_01(0)
  [lvol1_rmeta_1]  vg_sglxpk10 /dev/mapper/YRO_BDX_POCSGLXPK10_02(0)
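To go one step further than the Devices column, I believe the seg_pe_ranges field shows the exact physical extent ranges behind each internal LV (a sketch of the command only, I have not pasted the real output here):
# lvs -a -o name,seg_pe_ranges vg_sglxpk10
With that, each range of logical extents of _rimage_0 / _rimage_1 could be matched to a PV and a PE range, which is the LE-to-PE mapping I was looking for.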
- Secondly, I can see from a vgdisplay the global allocation of physical extents, and it clearly shows that both volumes from the "BDX" array are full. Each BDX PV has 2556 PE, 1 of which holds an _rmeta, so the mirroring is most probably done between those 2 volumes for the first 2555 extents, and between the two volumes from the array located at "LAC" for the last 261 extents (2556 - 2295 free = 261 PE used on each LAC PV). Extract:
# vgdisplay -v vg_sglxpk10
  --- Physical volumes ---
  PV Name               /dev/mapper/YRO_BDX_POCSGLXPK10_01
  Total PE / Free PE    2556 / 0
  PV Name               /dev/mapper/YRO_BDX_POCSGLXPK10_02
  Total PE / Free PE    2556 / 0
  PV Name               /dev/mapper/YRO_LAC_POCSGLXPK10_02
  Total PE / Free PE    2556 / 2295
  PV Name               /dev/mapper/YRO_LAC_POCSGLXPK10_01
  Total PE / Free PE    2556 / 2295
In summary, I can say there is a problem, but I can't analyze it in further detail … that's my problem. And the case presented here is rather simple; it could be much more complex in "real world" IT.
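For the allocation itself, the only lead I have found so far is the "cling" allocation policy combined with the PV tags (a sketch only, based on the lvm.conf documentation of allocation/cling_tag_list; I have not validated it on my LVM version). The idea is to declare the tags in lvm.conf so that "cling" treats two PVs as related when they carry the same tag:
allocation {
    cling_tag_list = [ "@BDX", "@LAC" ]
}
Then create the raid1 with exactly one PV per array, sized so that each image fills a single PV (2555 data extents + 1 extent for _rmeta = 2556 PE), and extend it with --alloc cling so that each image stays on PVs carrying its own tag:
# lvcreate --type raid1 -m 1 -l 2555 -n lvol1 vg_sglxpk10 /dev/mapper/YRO_BDX_POCSGLXPK10_01 /dev/mapper/YRO_LAC_POCSGLXPK10_01
# lvextend --alloc cling -l +261 vg_sglxpk10/lvol1
If that works as documented, _rimage_0 would grow onto YRO_BDX_POCSGLXPK10_02 and _rimage_1 onto YRO_LAC_POCSGLXPK10_02, i.e. every logical extent mirrored between the two arrays. But I would still like a way to verify it extent by extent, hence my question above.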
Eric