Custom Query (684 matches)

Results (25 - 27 of 684)

Ticket #828: pb with exclude (resolution: fixed; owner: Bruno Cornec; reporter: sylvain)
Description

Hi everyone,

We are having some trouble with mondoarchive. Our system is RHEL 5.11; the server we want to back up is connected to the backup server over NFS.

The packages we use:

{{{
afio-2.5-1.rhel5.x86_64.rpm
afio-debuginfo-2.5-1.rhel5.x86_64.rpm
buffer-1.19-4.rhel5.x86_64.rpm
buffer-debuginfo-1.19-4.rhel5.x86_64.rpm
mindi-2.1.3-1.rhel5.x86_64.rpm
mindi-busybox-1.18.5-1.rhel5.x86_64.rpm
mondo-3.0.2-1.rhel5.x86_64.rpm
}}}

The command we use:

{{{
/usr/sbin/mondoarchive -O -N -s 10000m -9 -n mondo01:/mondo -E "$EXCLUDE_DISK" -T ${DIR_HOST_TMP} -S ${DIR_HOST_SCRATCH} -p ${HOSTNAME} -d ${HOSTNAME} > /mondo/${HOSTNAME}/mondoarchive2.cmd
}}}

We do not want to back up the following filesystems:

ls /dev/mapper/disk*
/dev/mapper/diskAgrBck      /dev/mapper/diskCwtBckp1    /dev/mapper/diskGpvBck      /dev/mapper/diskGriBinp1    /dev/mapper/diskIntBck
/dev/mapper/diskAgrBckp1    /dev/mapper/diskCwtBin      /dev/mapper/diskGpvBck2     /dev/mapper/diskGriData     /dev/mapper/diskIntBckp1
/dev/mapper/diskAgrBin      /dev/mapper/diskCwtBinp1    /dev/mapper/diskGpvBck2p1   /dev/mapper/diskGriData2    /dev/mapper/diskIntBin
/dev/mapper/diskAgrBinp1    /dev/mapper/diskCwtData     /dev/mapper/diskGpvBckp1    /dev/mapper/diskGriData2p1  /dev/mapper/diskIntBinp1
/dev/mapper/diskAgrData     /dev/mapper/diskCwtData2    /dev/mapper/diskGpvBin      /dev/mapper/diskGriDatap1   /dev/mapper/diskIntData
/dev/mapper/diskAgrData2    /dev/mapper/diskCwtData2p1  /dev/mapper/diskGpvBinp1    /dev/mapper/diskHarBck      /dev/mapper/diskIntData2
/dev/mapper/diskAgrData2p1  /dev/mapper/diskCwtDatap1   /dev/mapper/diskGpvData     /dev/mapper/diskHarBckp1    /dev/mapper/diskIntData2p1
/dev/mapper/diskAgrData3    /dev/mapper/diskDevBck      /dev/mapper/diskGpvData2    /dev/mapper/diskHarBin      /dev/mapper/diskIntDatap1
/dev/mapper/diskAgrData3p1  /dev/mapper/diskDevBck2     /dev/mapper/diskGpvData2p1  /dev/mapper/diskHarBinp1    /dev/mapper/diskRmnBck
/dev/mapper/diskAgrData4    /dev/mapper/diskDevBck2p1   /dev/mapper/diskGpvData3    /dev/mapper/diskHarData     /dev/mapper/diskRmnBckp1
/dev/mapper/diskAgrData4p1  /dev/mapper/diskDevBckp1    /dev/mapper/diskGpvData3p1  /dev/mapper/diskHarDatap1   /dev/mapper/diskRmnBin
/dev/mapper/diskAgrDatap1   /dev/mapper/diskDevBin      /dev/mapper/diskGpvDatap1   /dev/mapper/diskHisBck      /dev/mapper/diskRmnBinp1
/dev/mapper/diskCwtBck      /dev/mapper/diskDevBinp1    /dev/mapper/diskGriBck      /dev/mapper/diskHisBckp1    /dev/mapper/diskRmnData
/dev/mapper/diskCwtBck2     /dev/mapper/diskDevData     /dev/mapper/diskGriBck2     /dev/mapper/diskHisBin      /dev/mapper/diskRmnDatap1
/dev/mapper/diskCwtBck2p1   /dev/mapper/diskDevData2    /dev/mapper/diskGriBck2p1   /dev/mapper/diskHisBinp1
/dev/mapper/diskCwtBck3     /dev/mapper/diskDevData2p1  /dev/mapper/diskGriBckp1    /dev/mapper/diskHisData
/dev/mapper/diskCwtBck3p1   /dev/mapper/diskDevDatap1   /dev/mapper/diskGriBin      /dev/mapper/diskHisDatap1

To build the exclude list, we use this command:

{{{
EXCLUDE_DISK=`/usr/sbin/pvs | grep -v PV | grep -v vgsys | awk '{printf "%s|", $1}' | sed -e "s/|$//g" | sed -e "s/p[0-9]$//g" | sed -e "s/p[0-9]|/|/g"`
}}}

The result is:

 echo $EXCLUDE_DISK

/dev/mapper/diskAgrBck|/dev/mapper/diskAgrBin|/dev/mapper/diskAgrData2|/dev/mapper/diskAgrData3|/dev/mapper/diskAgrData4|/dev/mapper/diskAgrData|/dev/mapper/diskCwtBck2|/dev/mapper/diskCwtBck3|/dev/mapper/diskCwtBck|/dev/mapper/diskCwtBin|/dev/mapper/diskCwtData2|/dev/mapper/diskCwtData|/dev/mapper/diskDevBck2|/dev/mapper/diskDevBck|/dev/mapper/diskDevBin|/dev/mapper/diskDevData2|/dev/mapper/diskDevData|/dev/mapper/diskGpvBck2|/dev/mapper/diskGpvBck|/dev/mapper/diskGpvBin|/dev/mapper/diskGpvData2|/dev/mapper/diskGpvData3|/dev/mapper/diskGpvData|/dev/mapper/diskGriBck2|/dev/mapper/diskGriBck|/dev/mapper/diskGriBin|/dev/mapper/diskGriData2|/dev/mapper/diskGriData|/dev/mapper/diskHarBck|/dev/mapper/diskHarBin|/dev/mapper/diskHarData|/dev/mapper/diskHisBck|/dev/mapper/diskHisBin|/dev/mapper/diskHisData|/dev/mapper/diskIntBck|/dev/mapper/diskIntBin|/dev/mapper/diskIntData2|/dev/mapper/diskIntData|/dev/mapper/diskRmnBck|/dev/mapper/diskRmnBin|/dev/mapper/diskRmnData

 ls -all /dev/mapper/disk* | wc -l
82
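For reference, the filter chain above can be exercised offline against a canned `pvs` listing. This is a sketch: the sample devices and volume group names below are invented to mirror the report's naming pattern, not output from the affected host.

```shell
#!/bin/sh
# Hypothetical `pvs` listing (assumption: column 1 is the PV path,
# column 2 the VG name), used to exercise the same filter chain offline.
pvs_sample='  PV                         VG    Fmt  Attr PSize  PFree
  /dev/mapper/diskAgrBckp1   vgagr lvm2 a--  10.00g    0
  /dev/mapper/diskAgrData2p1 vgagr lvm2 a--  10.00g    0
  /dev/cciss/c0d0p2          vgsys lvm2 a--  50.00g    0'

# Same chain as in the report: drop the header line and the system VG,
# join the PV paths with "|", then strip the pN partition suffixes.
EXCLUDE_DISK=$(printf '%s\n' "$pvs_sample" \
  | grep -v PV \
  | grep -v vgsys \
  | awk '{printf "%s|", $1}' \
  | sed -e 's/|$//g' -e 's/p[0-9]$//g' -e 's/p[0-9]|/|/g')

echo "$EXCLUDE_DISK"   # /dev/mapper/diskAgrBck|/dev/mapper/diskAgrData2
```

Note the partition-suffix stripping only handles a single digit (`p[0-9]`), so a hypothetical `p10` partition would survive with a trailing `p1`; that does not arise in the listing above.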

mondoarchive crashes with the following output:

[Main] libmondo-archive.c->call_mindi_to_supply_boot_disks#918: mindi   --custom /mondo/sh1orapl/tmp/mondo.tmp.5WlR2b /mondo/sh1orapl/SCRATCH/m
ondo.scratch.30106/mondo.scratch.12722/images '/boot/vmlinuz-2.6.18-274.el5' '' '0' 338086 'no' 'no' '' 'yes' 272 76 '/dev/mapper/diskAgrBck|/dev/mappe
r/diskAgrBin|/dev/mapper/diskAgrData2|/dev/mapper/diskAgrData3|/dev/mapper/diskAgrData4|/dev/mapper/diskAgrData|/dev/mapper/diskCwtBck2|/dev/mapper/dis
kCwtBck3|/dev/mapper/diskCwtBck|/dev/mapper/diskCwtBin|/dev/mapper/diskCwtData2|/dev/mapper/diskCwtData|/dev/mapper/diskDevBck2|/dev/mapper/diskDevBck|
/dev/mapper/diskDevBin|/dev/mapper/diskDevData2|/dev/mapper/diskDevData|/dev/mapper/diskGpvBck2|/dev/mapper/diskGpvBck|/dev/mapper/diskGpvBin|/dev/mapp
er/diskGpvData2|/dev/mapper/diskGpvData3|/dev/mapper/diskGpvData|/dev/mapper/diskGriBck2|/dev/mapper/diskGriBck|/dev/mapper/diskGriBin|/dev/mapper/disk
GriData2|/dev/mapper/diskGriData|/dev/mapper/diskHarBck|/dev/mapper/diskHarBin|/dev/mapper/diskHarData|/dev/mapper/diskHisBck|/dev/mapper/diskHisBin|/d
ev/mapper/diskHisData|/dev/mapper/diskIntBck|/dev/mapper/diskIntBin|/dev/mapper/diskIntData2|/dev/mapper/diskIntData|/dev/mapper/diskRmnBck|/dev/mapper
/diskRmnBin|/dev/mapper/diskRmnData' 'yes' 'no' 'no' 32768 0 'no'
SIGABRT signal received from OS
Abort - probably failed assertion. I'm sleeping for a few seconds so you can read the message.
[Main] newt-specific.c->fatal_error#308: Fatal error received - 'MondoRescue is terminating in response to a signal from the OS'
                [Main] newt-specific.c->fatal_error#326: OK, I think I'm the main PID.
{{{

---evalcall---E---
---evalcall---1---      Dividing filelist into sets
---evalcall---2--- TASK:  [******** buffer overflow detected ***: /usr/sbin/mondoarchive terminated
======= Backtrace: =========
/lib64/libc.so.6(__chk_fail+0x2f)[0x3eeece807f]
/lib64/libc.so.6[0x3eeece74e9]
/lib64/libc.so.6(_IO_default_xsputn+0x94)[0x3eeec6e3e4]
/lib64/libc.so.6(_IO_vfprintf+0x3e13)[0x3eeec46653]
/lib64/libc.so.6(__vsprintf_chk+0x9d)[0x3eeece758d]
/lib64/libc.so.6(__sprintf_chk+0x80)[0x3eeece74d0]
/usr/sbin/mondoarchive[0x41eafb]
/usr/sbin/mondoarchive[0x40c42d]
/usr/sbin/mondoarchive[0x40d0c4]
/usr/sbin/mondoarchive[0x403749]
/lib64/libc.so.6(__libc_start_main+0xf4)[0x3eeec1d994]
/usr/sbin/mondoarchive[0x402b69]
======= Memory map: ========
00400000-00450000 r-xp 00000000 fd:00 1850813                            /usr/sbin/mondoarchive
0064f000-00651000 rw-p 0004f000 fd:00 1850813                            /usr/sbin/mondoarchive
00651000-00656000 rw-p 00651000 00:00 0
00850000-00851000 rw-p 00050000 fd:00 1850813                            /usr/sbin/mondoarchive
05f53000-06027000 rw-p 05f53000 00:00 0                                  [heap]
3001600000-3001612000 r-xp 00000000 fd:00 1846837                        /usr/lib64/libnewt.so.0.52.1
3001612000-3001811000 ---p 00012000 fd:00 1846837                        /usr/lib64/libnewt.so.0.52.1
3001811000-3001813000 rw-p 00011000 fd:00 1846837                        /usr/lib64/libnewt.so.0.52.1
3eee800000-3eee81c000 r-xp 00000000 fd:00 553025                         /lib64/ld-2.5.so
3eeea1c000-3eeea1d000 r--p 0001c000 fd:00 553025                         /lib64/ld-2.5.so
3eeea1d000-3eeea1e000 rw-p 0001d000 fd:00 553025                         /lib64/ld-2.5.so
3eeec00000-3eeed4e000 r-xp 00000000 fd:00 553026                         /lib64/libc-2.5.so
3eeed4e000-3eeef4e000 ---p 0014e000 fd:00 553026                         /lib64/libc-2.5.so
3eeef4e000-3eeef52000 r--p 0014e000 fd:00 553026                         /lib64/libc-2.5.so
3eeef52000-3eeef53000 rw-p 00152000 fd:00 553026                         /lib64/libc-2.5.so
3eeef53000-3eeef58000 rw-p 3eeef53000 00:00 0
3eef000000-3eef082000 r-xp 00000000 fd:00 553039                         /lib64/libm-2.5.so
3eef082000-3eef281000 ---p 00082000 fd:00 553039                         /lib64/libm-2.5.so
3eef281000-3eef282000 r--p 00081000 fd:00 553039                         /lib64/libm-2.5.so
3eef282000-3eef283000 rw-p 00082000 fd:00 553039                         /lib64/libm-2.5.so
3eef400000-3eef402000 r-xp 00000000 fd:00 553028                         /lib64/libdl-2.5.so
3eef402000-3eef602000 ---p 00002000 fd:00 553028                         /lib64/libdl-2.5.so
3eef602000-3eef603000 r--p 00002000 fd:00 553028                         /lib64/libdl-2.5.so
3eef603000-3eef604000 rw-p 00003000 fd:00 553028                         /lib64/libdl-2.5.so
3eef800000-3eef816000 r-xp 00000000 fd:00 553029                         /lib64/libpthread-2.5.so
3eef816000-3eefa15000 ---p 00016000 fd:00 553029                         /lib64/libpthread-2.5.so
3eefa15000-3eefa16000 r--p 00015000 fd:00 553029                         /lib64/libpthread-2.5.so
3eefa16000-3eefa17000 rw-p 00016000 fd:00 553029                         /lib64/libpthread-2.5.so
3eefa17000-3eefa1b000 rw-p 3eefa17000 00:00 0
3efde00000-3efde0d000 r-xp 00000000 fd:00 553046                         /lib64/libgcc_s-4.1.2-20080825.so.1
3efde0d000-3efe00d000 ---p 0000d000 fd:00 553046                         /lib64/libgcc_s-4.1.2-20080825.so.1
3efe00d000-3efe00e000 rw-p 0000d000 fd:00 553046                         /lib64/libgcc_s-4.1.2-20080825.so.1
3fd6000000-3fd60c2000 r-xp 00000000 fd:00 1827547                        /usr/lib64/libslang.so.2.0.6
3fd60c2000-3fd62c1000 ---p 000c2000 fd:00 1827547                        /usr/lib64/libslang.so.2.0.6
3fd62c1000-3fd62dc000 rw-p 000c1000 fd:00 1827547                        /usr/lib64/libslang.so.2.0.6
3fd62dc000-3fd630f000 rw-p 3fd62dc000 00:00 0
2abb1f0c0000-2abb1f0c3000 rw-p 2abb1f0c0000 00:00 0
2abb1f0dd000-2abb1f0e0000 rw-p 2abb1f0dd000 00:00 0
2abb1f0e0000-2abb1f115000 r--s 00000000 fd:02 877830                     /var/db/nscd/hosts
7fff6b8ac000-7fff6b8c7000 rw-p 7ffffffe3000 00:00 0                      [stack]
7fff6b901000-7fff6b904000 r-xp 7fff6b901000 00:00 0                      [vdso]
ffffffffff600000-ffffffffffe00000 ---p 00000000 00:00 0                  [vsyscall]
...............]  24% done;  0:25 to go

---evalcall---E---
---evalcall---1---      Dividing filelist into sets
---evalcall---2--- TASK:  [*****...............]  25% done;  0:24 to go
---evalcall---E---
---evalcall---1---      Dividing filelist into sets
---evalcall---2--- TASK:  [******..............]  28% done;  0:23 to go
---evalcall---E---
---evalcall---1---      Dividing filelist into sets
---evalcall---2--- TASK:  [******..............]  30% done;  0:21 to go
---evalcall---E---
---evalcall---1---      Dividing filelist into sets
---evalcall---2--- TASK:  [*******.............]  31% done;  0:20 to go
---evalcall---E---
---evalcall---1---      Dividing filelist into sets
---evalcall---2--- TASK:  [*******.............]  32% done;  0:19 to go
---evalcall---E---
---evalcall---1---      Dividing filelist into sets
---evalcall---2--- TASK:  [*******.............]  33% done;  0:20 to go
---evalcall---E---
---evalcall---1---      Dividing filelist into sets
---evalcall---2--- TASK:  [*******.............]  34% done;  0:23 to go
---evalcall---E---
---evalcall---1---      Dividing filelist into sets
---evalcall---2--- TASK:  [*******.............]  35% done;  0:24 to go
---evalcall---E---
---evalcall---1---      Dividing filelist into sets
---evalcall---2--- TASK:  [********............]  36% done;  0:23 to go
---evalcall---E---
---evalcall---1---      Dividing filelist into sets
---evalcall---2--- TASK:  [********............]  37% done;  0:22 to go
---evalcall---E---
---evalcall---1---      Dividing filelist into sets
---evalcall---2--- TASK:  [********............]  38% done;  0:22 to go
---evalcall---E---
---evalcall---1---      Dividing filelist into sets
---evalcall---2--- TASK:  [********............]  39% done;  0:21 to go
---evalcall---E---
---evalcall---1---      Dividing filelist into sets
---evalcall---2--- TASK:  [********............]  40% done;  0:22 to go
---evalcall---E---
---evalcall---1---      Dividing filelist into sets
---evalcall---2--- TASK:  [*********...........]  41% done;  0:23 to go
---evalcall---E---
---evalcall---1---      Dividing filelist into sets
---evalcall---2--- TASK:  [*********...........]  42% done;  0:22 to go
---evalcall---E---
---evalcall---1---      Dividing filelist into sets
---evalcall---2--- TASK:  [*********...........]  43% done;  0:22 to go
---evalcall---E---
---evalcall---1---      Dividing filelist into sets
---evalcall---2--- TASK:  [*********...........]  44% done;  0:21 to go
---evalcall---E---
---evalcall---1---      Dividing filelist into sets
---evalcall---2--- TASK:  [***********.........]  53% done;  0:15 to go
---evalcall---E---
---evalcall---1---      Dividing filelist into sets
---evalcall---2--- TASK:  [*************.......]  62% done;  0:11 to go
---evalcall---E---
---evalcall---1---      Dividing filelist into sets
---evalcall---2--- TASK:  [***************.....]  71% done;  0:08 to go
---evalcall---E---
---evalcall---1---      Dividing filelist into sets
---evalcall---2--- TASK:  [*****************...]  81% done;  0:04 to go
---evalcall---E---
---evalcall---1---      Dividing filelist into sets
---evalcall---2--- TASK:  [******************..]  90% done;  0:02 to go
---evalcall---E---
---evalcall---1---      Dividing filelist into sets
---evalcall---2--- TASK:  [********************]  98% done;  0:00 to go
---evalcall---E---
---evalcall---1---      Dividing filelist into sets
---evalcall---2--- TASK:  [********************]  99% done;  0:00 to go
---evalcall---E---
Your backup will probably occupy a single nfs. Maybe two.
Done.
Copying Mondo's core files to the scratch directory
Done.
Calling MINDI to create boot+data disks
Your boot loader is GRUB and it boots from /dev/cciss/c0d0
Boot loader version string: grub (GNU GRUB 0.97)
---evalcall---1--- Calling MINDI to create boot+data disk
---evalcall---2--- TASK:  [*...................]   3% done;  0:32 to go
---evalcall---E---
SIGABRT signal received from OS
Abort - probably failed assertion. I'm sleeping for a few seconds so you can rea
Fatal error... MondoRescue is terminating in response to a signal from the OS
---FATALERROR--- MondoRescue is terminating in response to a signal from the OS
If you require technical support, please contact the mailing list.
See http://www.mondorescue.org for details.
The list's members can help you, if you attach that file to your e-mail.
Log file: /var/log/mondoarchive.log
Mondo has aborted.
Execution run ended; result=254
Type 'less /var/log/mondoarchive.log' to see the output log
}}}
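The backtrace points at glibc's fortify machinery (`__sprintf_chk` calling `__chk_fail`), which aborts when a formatted string overruns a fixed-size buffer. Since the full exclude list is passed on mindi's command line, the sheer length of the `-E` argument is a plausible trigger. As a rough sketch (device names rebuilt from the report's naming pattern, not read from any system), the list alone already runs to several hundred characters:

```shell
#!/bin/sh
# Rebuild an exclude list of the same shape as the report's devices
# (hypothetical: 9 groups x 3 roles here) and measure its length, to
# show the size of the string that gets sprintf'd into mindi's command.
list=""
for grp in Agr Cwt Dev Gpv Gri Har His Int Rmn; do
  for role in Bck Bin Data; do
    list="${list}/dev/mapper/disk${grp}${role}|"
  done
done
list=${list%|}                # drop the trailing separator
printf '%s' "$list" | wc -c   # 629 characters for just 27 entries
```

The real list in the report has 41 entries, so the argument handed to `sprintf` is correspondingly longer still.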
Could you help us? We have tried many solutions, all with bad results.

Thanks in advance for your answer.


Ticket #827: Kernel panic - not syncing: VFS: unable to mount root fs on unknown-block(hd1,0) (resolution: duplicate; owner: Bruno Cornec; reporter: usama.tariq)
Description

I received the message below while restoring a mondorescue backup, when booting from the ISO. I am testing it on CentOS 5 installed in VirtualBox:

{{{
No filesystem could mount root, tried: ext2 iso9660
Kernel panic - not syncing: VFS: unable to mount root fs on unknown-block(hd1,0)
}}}

Below is the content of my syslinux/syslinux.cfg file:

{{{
prompt 1
display message.txt
F1 message.txt
F2 boot1.txt
F3 boot2.txt
F4 pxe.txt
default interactive
timeout 300

label interactive
  kernel /vmlinuz
  append initrd=/initrd.img root=/dev/ram0 rw ramdisk_size=182964 interactive apm=0ff devfs=nomount noresume selinux=0 barrier=off udevtimeout=10

label expert
  kernel /vmlinuz
  append initrd=/initrd.img root=/dev/ram0 rw ramdisk_size=182964 expert apm=0ff devfs=nomount noresume selinux=0 barrier=off udevtimeout=10

label compare
  kernel /vmlinuz
  append initrd=/initrd.img root=/dev/ram0 rw ramdisk_size=182964 compare apm=0ff devfs=nomount noresume selinux=0 barrier=off udevtimeout=10

label iso
  kernel /vmlinuz
  append initrd=/initrd.img root=/dev/ram0 rw ramdisk_size=182964 iso apm=0ff devfs=nomount noresume selinux=0 barrier=off udevtimeout=10

label nuke
  kernel /vmlinuz
  append initrd=/initrd.img root=/dev/ram0 rw ramdisk_size=182964 nuke apm=0ff devfs=nomount noresume selinux=0 barrier=off udevtimeout=10

label isonuke
  kernel /vmlinuz
  append initrd=/initrd.img root=/dev/ram0 rw ramdisk_size=182964 isonuke apm=0ff devfs=nomount noresume selinux=0 barrier=off udevtimeout=10
}}}
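In a well-formed config, each label carries one `kernel` line and one `append` line that includes both `initrd=` and `root=`; if the options really are split across lines in the file on disk (as they appear in this paste), syslinux could hand the kernel an incomplete command line, and the kernel would fall back to guessing a root device. A quick hedged sanity check, here run against an embedded two-label sample rather than the real file:

```shell
#!/bin/sh
# Sketch: sanity-check a syslinux config. Each label should carry one
# kernel line and one append line that includes initrd= and root=.
cfg=/tmp/syslinux.cfg.sample        # embedded sample, not the real file
cat > "$cfg" <<'EOF'
default interactive
timeout 300

label interactive
  kernel /vmlinuz
  append initrd=/initrd.img root=/dev/ram0 rw ramdisk_size=182964 interactive

label expert
  kernel /vmlinuz
  append initrd=/initrd.img root=/dev/ram0 rw ramdisk_size=182964 expert
EOF

# Count labels, kernel lines, and append lines; flag append lines that
# are missing initrd= or root= (a symptom of options split across lines).
result=$(awk '
  BEGIN        { labels = kernels = appends = bad = 0 }
  /^label /    { labels++ }
  /^ *kernel / { kernels++ }
  /^ *append / { appends++; if ($0 !~ /initrd=/ || $0 !~ /root=/) bad++ }
  END { printf "labels=%d kernels=%d appends=%d malformed=%d",
        labels, kernels, appends, bad }' "$cfg")
echo "$result"   # labels=2 kernels=2 appends=2 malformed=0
```

Running the same check against the actual file from the ISO would show whether the append options survived on a single line.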


Kindly guide me on how to get rid of this issue. I have also attached the log files; please look into them. Below are the versions of the MondoRescue components:

Mindi-BusyBox v1.21.1-r3332, Mindi 3.0.2-r3578, Mondo-3.2.2-1.rhel5.x86_64, perl-ProjectBuilder 0.14.6-1

Regards, Usama

Ticket #826: Kernel panic - not syncing VFS unable to mount root fs on unknown block(hd1,0) (resolution: duplicate; owner: Bruno Cornec; reporter: usama.tariq)
Description

Verbatim duplicate of ticket #827 above.
