wiki:TracQuery


Trac Ticket Queries

In addition to reports, Trac provides support for custom ticket queries, which can be used to display tickets that meet specified criteria.

To configure and execute a custom query, switch to the View Tickets module from the navigation bar, and select the Custom Query link.

Filters

When you first go to the query page, the default filter will display tickets relevant to you:

  • If logged in, then it will display all open tickets assigned to you.
  • If not logged in but you have specified a name or email address in the preferences, then it will display all open tickets where your email (or name if email not defined) is in the CC list.
  • If not logged in and no name/email is defined in the preferences, then all open issues are displayed.

Current filters can be removed by clicking the button to the left with the minus sign on the label. New filters are added from the dropdown lists at the bottom corners of the filters box; 'And' conditions on the left, 'Or' conditions on the right. Filters with either a text box or a dropdown menu of options can be added multiple times to perform an Or on the criteria.

You can use the fields just below the filters box to group the results based on a field, or display the full description for each ticket.

After you have edited your filters, click the Update button to refresh your results.

Clicking on one of the query results will take you to that ticket. You can navigate through the results by clicking the Next Ticket or Previous Ticket links just below the main menu bar, or click the Back to Query link to return to the query page.

You can safely edit any of the tickets and continue to navigate through the results using the Next/Previous/Back to Query links after saving your results. When you return to the query, any tickets which were edited will be displayed with italicized text. If one of the tickets was edited such that it no longer matches the query criteria, the text will also be greyed. Lastly, if a new ticket matching the query criteria has been created, it will be shown in bold.

The query results can be refreshed and cleared of these status indicators by clicking the Update button again.

Saving Queries

Trac allows you to save the query as a named query accessible from the reports module. To save a query ensure that you have Updated the view and then click the Save query button displayed beneath the results. You can also save references to queries in Wiki content, as described below.

Note: an easy way to build queries like the ones below is to build and test them in the Custom Query module and, when ready, click Save query. This will build the query string for you. All you need to do is remove the extra line breaks.

Note: you must have the REPORT_CREATE permission in order to save queries to the list of default reports. The Save query button will only appear if you are logged in as a user that has been granted this permission. If your account does not have permission to create reports, you can still use the methods below to save a query.

You may want to save some queries so that you can come back to them later. You can do this by making a link to the query from any Wiki page.

[query:status=new|assigned|reopened&version=1.0 Active tickets against 1.0]

Which is displayed as:

Active tickets against 1.0

This uses a very simple query language to specify the criteria; see Query Language below.

Alternatively, you can copy the query string of a query and paste that into the Wiki link, including the leading ? character:

[query:?status=new&status=assigned&status=reopened&group=owner Assigned tickets by owner]

Which is displayed as:

Assigned tickets by owner

Customizing the table format

You can also customize the columns displayed in the table format (format=table) by using col=<field>. You can specify multiple fields and what order they are displayed in by placing pipes (|) between the columns:

[[TicketQuery(max=3,status=closed,order=id,desc=1,format=table,col=resolution|summary|owner|reporter)]]

This is displayed as:

Results (1 - 3 of 653)

Ticket  Resolution  Summary                                                                         Owner  Reporter
#831    duplicate   Not able to boot using mondo/mindi                                              bruno  rachanarao123
#828    fixed       pb with exclude                                                                 bruno  sm1971fr
#827    duplicate   Kernel panic - not syncing VFS unable to mount root fs on unknown block(hd1,0)  bruno  usama.tariq
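
The table format can also be combined with grouping. As a sketch (assuming the group parameter from the query language, as used in the query string example above, is accepted by the macro), the following would group closed tickets by their resolution:

[[TicketQuery(status=closed,group=resolution,format=table,col=summary|owner|reporter)]]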

Full rows

In table format you can also have full rows by using rows=<field>:

[[TicketQuery(max=3,status=closed,order=id,desc=1,format=table,col=resolution|summary|owner|reporter,rows=description)]]

This is displayed as:

Results (1 - 3 of 653)

Ticket  Resolution  Summary                                                                         Owner  Reporter
#831    duplicate   Not able to boot using mondo/mindi                                              bruno  rachanarao123
        Description: "I am trying to restore a centos-6 32 bit system using nuke/interactive/expert option. However, system hangs after loading initrd images and I am not able to proceed further. ..."
#828    fixed       pb with exclude                                                                 bruno  sm1971fr
        Description: "We have some trouble with mondoarchive. Our system is RHEL 5.11, the server we want to backup is connected to the backup server with nfs. ..."
#827    duplicate   Kernel panic - not syncing VFS unable to mount root fs on unknown block(hd1,0)  bruno  usama.tariq
        Description: "I received below message while restore mondorescue backup, When I boot from ISO. I am testing it on centos5 installed on virtualbox ..."

Query Language

query: TracLinks and the [[TicketQuery]] macro both use a mini “query language” for specifying query filters. Filters are separated by ampersands (&). Each filter consists of the ticket field name, an operator and one or more values. Multiple values are separated by a pipe (|), meaning that the filter matches any of the values. To include a literal & or | in a value, escape the character with a backslash (\).
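
For example, the following link (the field values are only illustrative) combines two filters with & and uses a pipe to match tickets whose summary contains either "boot" or "panic"; a literal pipe inside a value would be written as \| instead:

[query:status=new|assigned&summary~=boot|panic New or assigned tickets mentioning boot or panic]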

The available operators are:

= the field content exactly matches one of the values
~= the field content contains one or more of the values
^= the field content starts with one of the values
$= the field content ends with one of the values

All of these operators can also be negated:

!= the field content matches none of the values
!~= the field content does not contain any of the values
!^= the field content does not start with any of the values
!$= the field content does not end with any of the values
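
As an illustration (the owner name and the summary text are made up), the negated operators combine with the others in the same way:

[query:status!=closed&owner!=somebody&summary~=kernel Open tickets about the kernel not owned by somebody]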

The date fields created and modified can be constrained by using the = operator and specifying a value containing two dates separated by two dots (..). Either end of the date range can be left empty, meaning that the corresponding end of the range is open. The date parser understands a few natural date specifications like "3 weeks ago", "last month" and "now", as well as Bugzilla-style date specifications like "1d", "2w", "3m" or "4y" for 1 day, 2 weeks, 3 months and 4 years, respectively. Spaces in date specifications can be omitted to avoid having to quote the query string.

created=2007-01-01..2008-01-01 query tickets created in 2007
created=lastmonth..thismonth query tickets created during the previous month
modified=1weekago.. query tickets that have been modified in the last week
modified=..30daysago query tickets that have been inactive for the last 30 days
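
The same date filters work in query: links and in the [[TicketQuery]] macro. For instance, a sketch reusing the parameters shown earlier, listing closed tickets created during the last week as a table:

[[TicketQuery(created=1weekago..,status=closed,order=id,desc=1,format=table,col=resolution|summary|owner)]]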

See also: TracTickets, TracReports, TracGuide, TicketQuery