Database apparently broken, can I repair it?

Here YaCy users can find help when something does not work, or works differently than expected. Obvious bugs should be filed directly in the bug tracker (http://bugs.yacy.net).
Forum rules
This forum is for usage problems and requests for help. If a bug is identified along the way, the thread is moved to the bug section for processing. So if you posted a thread here and cannot find it anymore, you will most likely find it again in the bug section.

Database apparently broken, can I repair it?

Post by zottel » Fri Aug 29, 2014 10:20 pm

Hello,

at some point today my yacy crashed on my VPS; the log is full of stack traces with "Too many open files". I have now restarted it, but search no longer works, and I get lots and lots of NullPointerExceptions in the log, regardless of whether I search myself or not. (I have stopped it again for now.)
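(For reference: "Too many open files" means the process ran into its file-descriptor limit. A minimal sketch of inspecting and raising that limit in the shell that starts YaCy; the values are whatever the system reports, nothing YaCy-specific:)

```shell
# Show the current soft and hard limits for open file descriptors.
ulimit -Sn
ulimit -Hn

# Raise the soft limit to the hard limit for this shell session, so a
# YaCy process started from it inherits the higher limit.
ulimit -n "$(ulimit -Hn)"

# Afterwards the soft limit matches the hard limit.
ulimit -Sn
```

A permanently higher limit would have to be configured system-wide (e.g. in limits.conf), which is distribution-specific.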

I have no backup of the database; the backup space I have for the VPS is not big enough for that. It would be a shame if the database were lost, as it recently held more than 26 million documents. Is there any chance of repairing it?

If not, can I somehow delete it and start from zero without having to reinstall and reconfigure my peer completely?

Thanks, zottel
Last edited by zottel on Fri Aug 29, 2014 11:56 pm, edited 1 time in total.
zottel
 
Posts: 51
Joined: Wed Jan 16, 2013 3:04 pm

Re: Database apparently broken, can I repair it?

Post by zottel » Fri Aug 29, 2014 11:09 pm

I have now found the checkindex.sh script in /usr/share/yacy/bin, but it does not work, because it apparently expects the Java classes in lib/, and there is no lib/ directory.

This is a Debian server using the apt package (latest version).

Since the classes apparently live in /usr/share/java/yacy/ here, I adjusted the script accordingly.

Output:

root@main:/usr/share/yacy/bin# ./checkindex.sh

NOTE: testing will be more thorough if you run java with '-ea:org.apache.lucene...', so assertions are enabled

Opening index @ DATA/INDEX/freeworld/SEGMENTS/solr_46/collection1/data/index/

ERROR: could not read any segments file in directory
org.apache.lucene.store.NoSuchDirectoryException: directory '/var/lib/yacy/INDEX/freeworld/SEGMENTS/solr_46/collection1/data/index' does not exist
   at org.apache.lucene.store.FSDirectory.listAll(FSDirectory.java:218)
   at org.apache.lucene.store.FSDirectory.listAll(FSDirectory.java:242)
   at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:802)
   at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:753)
   at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:453)
   at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:398)
   at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:2051)

Re: Database apparently broken, can I repair it?

Post by zottel » Fri Aug 29, 2014 11:34 pm

P.S.: That is 29 GB of database; it would be a real shame if it were lost. If there is any way, I am comfortable with the shell, or with entering commands into some DB client under guidance. If the database can be rescued with some effort, and someone is willing to go through that effort with me, I am in. :-)

Or, given the P2P nature, can I assume that all of this still exists on other yacy peers anyway? Then the remaining question is how I delete my database without having to set up my whole node from scratch. clearindex.sh?

Re: Database apparently broken, can I repair it?

Post by zottel » Fri Aug 29, 2014 11:54 pm

Ah, I also had to change solr_46 to solr_4_9 in the script. The check is running now, but it will probably take a while. I will report back when it is done.
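(Taken together, the edits amount to something like the following sketch. The jar directory, the classpath glob, and the index path are assumptions based on the Debian package layout and the error message above, not the literal script contents:)

```shell
#!/bin/sh
# Sketch of checkindex.sh adapted for the Debian apt package:
# jars live in /usr/share/java/yacy/ instead of lib/, and the index
# directory is named solr_4_9 rather than solr_46.
JARDIR="/usr/share/java/yacy"
INDEX="/var/lib/yacy/INDEX/freeworld/SEGMENTS/solr_4_9/collection1/data/index"

# Build the classpath from every jar in JARDIR.
CP=$(printf '%s:' "$JARDIR"/*.jar)

# Run Lucene's index checker; enabling assertions (-ea) makes the test
# more thorough. CheckIndex also has a -fix option, but that permanently
# drops unreadable segments, so it should only be used on a copy/backup.
java -ea:org.apache.lucene... -cp "$CP" org.apache.lucene.index.CheckIndex "$INDEX"
```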

Re: Database apparently broken, can I repair it?

Post by zottel » Sat Aug 30, 2014 12:29 am

Hm, the index itself appears to be OK. Here is the complete output of checkindex.sh:
root@main:/usr/share/yacy/bin# ./checkindex.sh

NOTE: testing will be more thorough if you run java with '-ea:org.apache.lucene...', so assertions are enabled

Opening index @ DATA/INDEX/freeworld/SEGMENTS/solr_4_9/collection1/data/index/

Segments file=segments_69ku numSegments=36 versions=[4.6 .. 4.9] format= userData={commitTimeMSec=1409219437043}
  1 of 36: name=_mdd docCount=9997159
    codec=Lucene46
    compound=false
    numFiles=11
    size (MB)=5,964.726
    diagnostics = {timestamp=1393874980058, os=Linux, os.version=3.2.0-4-amd64, mergeFactor=4, source=merge, lucene.version=4.6.1 1560866 - mark - 2014-01-23 20:11:13, os.arch=amd64, mergeMaxNumSegments=1, java.version=1.7.0_25, java.vendor=Oracle Corporation}
    has deletions [delGen=2864]
    test: open reader.........OK
    test: check integrity.....OK
    test: check live docs.....OK [3089076 deleted docs]
    test: fields..............OK [76 fields]
    test: field norms.........OK [20 fields]
    test: terms, freq, prox...OK [39816301 terms; 432184047 terms/docs pairs; 351361134 tokens]
    test (ignoring deletes): terms, freq, prox...OK [57131214 terms; 632368575 terms/docs pairs; 523845129 tokens]
    test: stored fields.......OK [204698531 total field count; avg 29.632 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
    test: docvalues...........OK [0 docvalues fields; 0 BINARY; 0 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 0 SORTED_SET]

  2 of 36: name=_3dio docCount=8283691
    codec=Lucene46
    compound=true
    numFiles=4
    size (MB)=4,746.357
    diagnostics = {timestamp=1403513403555, os=Linux, os.version=3.2.0-4-amd64, mergeFactor=10, source=merge, lucene.version=4.8.1 1594670 - rmuir - 2014-05-14 19:22:52, os.arch=amd64, mergeMaxNumSegments=-1, java.version=1.7.0_55, java.vendor=Oracle Corporation}
    has deletions [delGen=1878]
    test: open reader.........OK
    test: check integrity.....OK
    test: check live docs.....OK [7354 deleted docs]
    test: fields..............OK [84 fields]
    test: field norms.........OK [21 fields]
    test: terms, freq, prox...OK [46897353 terms; 511169530 terms/docs pairs; 401787818 tokens]
    test (ignoring deletes): terms, freq, prox...OK [47420392 terms; 518997029 terms/docs pairs; 415880137 tokens]
    test: stored fields.......OK [241367927 total field count; avg 29.164 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
    test: docvalues...........OK [0 docvalues fields; 0 BINARY; 0 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 0 SORTED_SET]

  3 of 36: name=_57ho docCount=6256188
    codec=Lucene49
    compound=true
    numFiles=4
    size (MB)=4,674.903
    diagnostics = {timestamp=1405125355941, os=Linux, os.version=3.2.0-4-amd64, mergeFactor=10, source=merge, lucene.version=4.9.0 1604085 - rmuir - 2014-06-20 06:22:23, os.arch=amd64, mergeMaxNumSegments=11, java.version=1.7.0_55, java.vendor=Oracle Corporation}
    has deletions [delGen=1225]
    test: open reader.........OK
    test: check integrity.....OK
    test: check live docs.....OK [165823 deleted docs]
    test: fields..............OK [84 fields]
    test: field norms.........OK [21 fields]
    test: terms, freq, prox...OK [42643775 terms; 457055675 terms/docs pairs; 402135029 tokens]
    test (ignoring deletes): terms, freq, prox...OK [44033910 terms; 471838785 terms/docs pairs; 417573001 tokens]
    test: stored fields.......OK [214651639 total field count; avg 35.244 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
    test: docvalues...........OK [0 docvalues fields; 0 BINARY; 0 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 0 SORTED_SET]

  4 of 36: name=_8njw docCount=617736
    codec=Lucene49
    compound=true
    numFiles=4
    size (MB)=645.746
    diagnostics = {timestamp=1408772594558, os=Linux, os.version=3.2.0-4-amd64, mergeFactor=22, source=merge, lucene.version=4.9.0 1604085 - rmuir - 2014-06-20 06:22:23, os.arch=amd64, mergeMaxNumSegments=10, java.version=1.7.0_65, java.vendor=Oracle Corporation}
    has deletions [delGen=201]
    test: open reader.........OK
    test: check integrity.....OK
    test: check live docs.....OK [27569 deleted docs]
    test: fields..............OK [83 fields]
    test: field norms.........OK [21 fields]
    test: terms, freq, prox...OK [6744006 terms; 55929557 terms/docs pairs; 59894566 tokens]
    test (ignoring deletes): terms, freq, prox...OK [7018182 terms; 58089640 terms/docs pairs; 61845294 tokens]
    test: stored fields.......OK [26733170 total field count; avg 45.298 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
    test: docvalues...........OK [0 docvalues fields; 0 BINARY; 0 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 0 SORTED_SET]

  5 of 36: name=_8wb5 docCount=4219240
    codec=Lucene49
    compound=true
    numFiles=4
    size (MB)=3,416.359
    diagnostics = {timestamp=1409142459753, os=Linux, os.version=3.2.0-4-amd64, mergeFactor=10, source=merge, lucene.version=4.9.0 1604085 - rmuir - 2014-06-20 06:22:23, os.arch=amd64, mergeMaxNumSegments=12, java.version=1.7.0_65, java.vendor=Oracle Corporation}
    has deletions [delGen=628]
    test: open reader.........OK
    test: check integrity.....OK
    test: check live docs.....OK [13431 deleted docs]
    test: fields..............OK [83 fields]
    test: field norms.........OK [21 fields]
    test: terms, freq, prox...OK [32476988 terms; 325132969 terms/docs pairs; 283056093 tokens]
    test (ignoring deletes): terms, freq, prox...OK [33671658 terms; 342526425 terms/docs pairs; 311197131 tokens]
    test: stored fields.......OK [162268616 total field count; avg 38.582 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
    test: docvalues...........OK [0 docvalues fields; 0 BINARY; 0 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 0 SORTED_SET]

  6 of 36: name=_93i4 docCount=107026
    codec=Lucene49
    compound=true
    numFiles=4
    size (MB)=510.657
    diagnostics = {timestamp=1409216609206, os=Linux, os.version=3.2.0-4-amd64, mergeFactor=18, source=merge, lucene.version=4.9.0 1604085 - rmuir - 2014-06-20 06:22:23, os.arch=amd64, mergeMaxNumSegments=11, java.version=1.7.0_65, java.vendor=Oracle Corporation}
    has deletions [delGen=3]
    test: open reader.........OK
    test: check integrity.....OK
    test: check live docs.....OK [3 deleted docs]
    test: fields..............OK [81 fields]
    test: field norms.........OK [21 fields]
    test: terms, freq, prox...OK [4568758 terms; 29828071 terms/docs pairs; 46747311 tokens]
    test (ignoring deletes): terms, freq, prox...OK [4568840 terms; 29829874 terms/docs pairs; 46751954 tokens]
    test: stored fields.......OK [15984561 total field count; avg 149.356 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
    test: docvalues...........OK [0 docvalues fields; 0 BINARY; 0 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 0 SORTED_SET]

  7 of 36: name=_8z6u docCount=26109
    codec=Lucene49
    compound=true
    numFiles=4
    size (MB)=273.137
    diagnostics = {timestamp=1409171758974, os=Linux, os.version=3.2.0-4-amd64, mergeFactor=15, source=merge, lucene.version=4.9.0 1604085 - rmuir - 2014-06-20 06:22:23, os.arch=amd64, mergeMaxNumSegments=11, java.version=1.7.0_65, java.vendor=Oracle Corporation}
    has deletions [delGen=48]
    test: open reader.........OK
    test: check integrity.....OK
    test: check live docs.....OK [2060 deleted docs]
    test: fields..............OK [81 fields]
    test: field norms.........OK [21 fields]
    test: terms, freq, prox...OK [2346054 terms; 15873903 terms/docs pairs; 22383718 tokens]
    test (ignoring deletes): terms, freq, prox...OK [2508779 terms; 17698618 terms/docs pairs; 24849892 tokens]
    test: stored fields.......OK [6722403 total field count; avg 279.529 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
    test: docvalues...........OK [0 docvalues fields; 0 BINARY; 0 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 0 SORTED_SET]

  8 of 36: name=_91rz docCount=92991
    codec=Lucene49
    compound=true
    numFiles=4
    size (MB)=206
    diagnostics = {timestamp=1409198665869, os=Linux, os.version=3.2.0-4-amd64, mergeFactor=19, source=merge, lucene.version=4.9.0 1604085 - rmuir - 2014-06-20 06:22:23, os.arch=amd64, mergeMaxNumSegments=11, java.version=1.7.0_65, java.vendor=Oracle Corporation}
    has deletions [delGen=53]
    test: open reader.........OK
    test: check integrity.....OK
    test: check live docs.....OK [854 deleted docs]
    test: fields..............OK [81 fields]
    test: field norms.........OK [21 fields]
    test: terms, freq, prox...OK [2467343 terms; 12097670 terms/docs pairs; 15280614 tokens]
    test (ignoring deletes): terms, freq, prox...OK [2579716 terms; 13055888 terms/docs pairs; 16645861 tokens]
    test: stored fields.......OK [6712259 total field count; avg 72.851 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
    test: docvalues...........OK [0 docvalues fields; 0 BINARY; 0 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 0 SORTED_SET]

  9 of 36: name=_8ygj docCount=52663
    codec=Lucene49
    compound=true
    numFiles=4
    size (MB)=388.738
    diagnostics = {timestamp=1409165501672, os=Linux, os.version=3.2.0-4-amd64, mergeFactor=10, source=merge, lucene.version=4.9.0 1604085 - rmuir - 2014-06-20 06:22:23, os.arch=amd64, mergeMaxNumSegments=-1, java.version=1.7.0_65, java.vendor=Oracle Corporation}
    has deletions [delGen=155]
    test: open reader.........OK
    test: check integrity.....OK
    test: check live docs.....OK [14221 deleted docs]
    test: fields..............OK [81 fields]
    test: field norms.........OK [21 fields]
    test: terms, freq, prox...OK [1244788 terms; 10163295 terms/docs pairs; 13647565 tokens]
    test (ignoring deletes): terms, freq, prox...OK [3380432 terms; 23969656 terms/docs pairs; 34596570 tokens]
    test: stored fields.......OK [4359518 total field count; avg 113.405 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
    test: docvalues...........OK [0 docvalues fields; 0 BINARY; 0 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 0 SORTED_SET]

  10 of 36: name=_8yur docCount=17751
    codec=Lucene49
    compound=true
    numFiles=4
    size (MB)=318.285
    diagnostics = {timestamp=1409168626046, os=Linux, os.version=3.2.0-4-amd64, mergeFactor=14, source=merge, lucene.version=4.9.0 1604085 - rmuir - 2014-06-20 06:22:23, os.arch=amd64, mergeMaxNumSegments=11, java.version=1.7.0_65, java.vendor=Oracle Corporation}
    has deletions [delGen=42]
    test: open reader.........OK
    test: check integrity.....OK
    test: check live docs.....OK [350 deleted docs]
    test: fields..............OK [80 fields]
    test: field norms.........OK [21 fields]
    test: terms, freq, prox...OK [3461246 terms; 18774105 terms/docs pairs; 28009517 tokens]
    test (ignoring deletes): terms, freq, prox...OK [3506430 terms; 19166528 terms/docs pairs; 28609434 tokens]
    test: stored fields.......OK [9507088 total field count; avg 546.353 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
    test: docvalues...........OK [0 docvalues fields; 0 BINARY; 0 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 0 SORTED_SET]

  11 of 36: name=_91it docCount=15587
    codec=Lucene49
    compound=true
    numFiles=4
    size (MB)=208.189
    diagnostics = {timestamp=1409195615593, os=Linux, os.version=3.2.0-4-amd64, mergeFactor=20, source=merge, lucene.version=4.9.0 1604085 - rmuir - 2014-06-20 06:22:23, os.arch=amd64, mergeMaxNumSegments=11, java.version=1.7.0_65, java.vendor=Oracle Corporation}
    has deletions [delGen=28]
    test: open reader.........OK
    test: check integrity.....OK
    test: check live docs.....OK [101 deleted docs]
    test: fields..............OK [80 fields]
    test: field norms.........OK [21 fields]
    test: terms, freq, prox...OK [2089011 terms; 12185583 terms/docs pairs; 18235972 tokens]
    test (ignoring deletes): terms, freq, prox...OK [2108178 terms; 12321982 terms/docs pairs; 18464071 tokens]
    test: stored fields.......OK [6307097 total field count; avg 407.277 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
    test: docvalues...........OK [0 docvalues fields; 0 BINARY; 0 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 0 SORTED_SET]

  12 of 36: name=_93jp docCount=524
    codec=Lucene49
    compound=true
    numFiles=3
    size (MB)=1.103
    diagnostics = {timestamp=1409217130387, os=Linux, os.version=3.2.0-4-amd64, mergeFactor=10, source=merge, lucene.version=4.9.0 1604085 - rmuir - 2014-06-20 06:22:23, os.arch=amd64, mergeMaxNumSegments=-1, java.version=1.7.0_65, java.vendor=Oracle Corporation}
    no deletions
    test: open reader.........OK
    test: check integrity.....OK
    test: check live docs.....OK
    test: fields..............OK [73 fields]
    test: field norms.........OK [15 fields]
    test: terms, freq, prox...OK [22628 terms; 61489 terms/docs pairs; 70734 tokens]
    test: stored fields.......OK [37571 total field count; avg 71.7 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
    test: docvalues...........OK [0 docvalues fields; 0 BINARY; 0 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 0 SORTED_SET]

  13 of 36: name=_93ld docCount=390
    codec=Lucene49
    compound=true
    numFiles=3
    size (MB)=1.18
    diagnostics = {timestamp=1409217660005, os=Linux, os.version=3.2.0-4-amd64, mergeFactor=10, source=merge, lucene.version=4.9.0 1604085 - rmuir - 2014-06-20 06:22:23, os.arch=amd64, mergeMaxNumSegments=-1, java.version=1.7.0_65, java.vendor=Oracle Corporation}
    no deletions
    test: open reader.........OK
    test: check integrity.....OK
    test: check live docs.....OK
    test: fields..............OK [72 fields]
    test: field norms.........OK [16 fields]
    test: terms, freq, prox...OK [22674 terms; 71843 terms/docs pairs; 76181 tokens]
    test: stored fields.......OK [38898 total field count; avg 99.738 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
    test: docvalues...........OK [0 docvalues fields; 0 BINARY; 0 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 0 SORTED_SET]

  14 of 36: name=_93lw docCount=578
    codec=Lucene49
    compound=true
    numFiles=3
    size (MB)=2.607
    diagnostics = {timestamp=1409217868911, os=Linux, os.version=3.2.0-4-amd64, mergeFactor=10, source=merge, lucene.version=4.9.0 1604085 - rmuir - 2014-06-20 06:22:23, os.arch=amd64, mergeMaxNumSegments=-1, java.version=1.7.0_65, java.vendor=Oracle Corporation}
    no deletions
    test: open reader.........OK
    test: check integrity.....OK
    test: check live docs.....OK
    test: fields..............OK [77 fields]
    test: field norms.........OK [21 fields]
    test: terms, freq, prox...OK [55076 terms; 119722 terms/docs pairs; 205560 tokens]
    test: stored fields.......OK [65219 total field count; avg 112.836 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
    test: docvalues...........OK [0 docvalues fields; 0 BINARY; 0 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 0 SORTED_SET]

  15 of 36: name=_93m7 docCount=402
    codec=Lucene49
    compound=true
    numFiles=3
    size (MB)=1.626
    diagnostics = {timestamp=1409218011522, os=Linux, os.version=3.2.0-4-amd64, mergeFactor=10, source=merge, lucene.version=4.9.0 1604085 - rmuir - 2014-06-20 06:22:23, os.arch=amd64, mergeMaxNumSegments=-1, java.version=1.7.0_65, java.vendor=Oracle Corporation}
    no deletions
    test: open reader.........OK
    test: check integrity.....OK
    test: check live docs.....OK
    test: fields..............OK [80 fields]
    test: field norms.........OK [21 fields]
    test: terms, freq, prox...OK [42342 terms; 78434 terms/docs pairs; 120763 tokens]
    test: stored fields.......OK [37933 total field count; avg 94.361 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
    test: docvalues...........OK [0 docvalues fields; 0 BINARY; 0 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 0 SORTED_SET]

  16 of 36: name=_93l2 docCount=458
    codec=Lucene49
    compound=true
    numFiles=3
    size (MB)=1.662
    diagnostics = {timestamp=1409217569418, os=Linux, os.version=3.2.0-4-amd64, mergeFactor=10, source=merge, lucene.version=4.9.0 1604085 - rmuir - 2014-06-20 06:22:23, os.arch=amd64, mergeMaxNumSegments=-1, java.version=1.7.0_65, java.vendor=Oracle Corporation}
    no deletions
    test: open reader.........OK
    test: check integrity.....OK
    test: check live docs.....OK
    test: fields..............OK [73 fields]
    test: field norms.........OK [17 fields]
    test: terms, freq, prox...OK [29380 terms; 97923 terms/docs pairs; 108602 tokens]
    test: stored fields.......OK [55890 total field count; avg 122.031 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
    test: docvalues...........OK [0 docvalues fields; 0 BINARY; 0 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 0 SORTED_SET]

  17 of 36: name=_93jy docCount=387
    codec=Lucene49
    compound=true
    numFiles=3
    size (MB)=1.284
    diagnostics = {timestamp=1409217205726, os=Linux, os.version=3.2.0-4-amd64, mergeFactor=10, source=merge, lucene.version=4.9.0 1604085 - rmuir - 2014-06-20 06:22:23, os.arch=amd64, mergeMaxNumSegments=-1, java.version=1.7.0_65, java.vendor=Oracle Corporation}
    no deletions
    test: open reader.........OK
    test: check integrity.....OK
    test: check live docs.....OK
    test: fields..............OK [72 fields]
    test: field norms.........OK [16 fields]
    test: terms, freq, prox...OK [24587 terms; 71552 terms/docs pairs; 84738 tokens]
    test: stored fields.......OK [40949 total field count; avg 105.811 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
    test: docvalues...........OK [0 docvalues fields; 0 BINARY; 0 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 0 SORTED_SET]

  18 of 36: name=_93ki docCount=319
    codec=Lucene49
    compound=true
    numFiles=3
    size (MB)=1.341
    diagnostics = {timestamp=1409217388416, os=Linux, os.version=3.2.0-4-amd64, mergeFactor=10, source=merge, lucene.version=4.9.0 1604085 - rmuir - 2014-06-20 06:22:23, os.arch=amd64, mergeMaxNumSegments=-1, java.version=1.7.0_65, java.vendor=Oracle Corporation}
    no deletions
    test: open reader.........OK
    test: check integrity.....OK
    test: check live docs.....OK
    test: fields..............OK [73 fields]
    test: field norms.........OK [17 fields]
    test: terms, freq, prox...OK [28252 terms; 59877 terms/docs pairs; 115995 tokens]
    test: stored fields.......OK [46310 total field count; avg 145.172 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
    test: docvalues...........OK [0 docvalues fields; 0 BINARY; 0 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 0 SORTED_SET]

  19 of 36: name=_93nk docCount=384
    codec=Lucene49
    compound=true
    numFiles=3
    size (MB)=1.541
    diagnostics = {timestamp=1409219124263, os=Linux, os.version=3.2.0-4-amd64, mergeFactor=10, source=merge, lucene.version=4.9.0 1604085 - rmuir - 2014-06-20 06:22:23, os.arch=amd64, mergeMaxNumSegments=-1, java.version=1.7.0_65, java.vendor=Oracle Corporation}
    no deletions
    test: open reader.........OK
    test: check integrity.....OK
    test: check live docs.....OK
    test: fields..............OK [75 fields]
    test: field norms.........OK [19 fields]
    test: terms, freq, prox...OK [34652 terms; 68631 terms/docs pairs; 146668 tokens]
    test: stored fields.......OK [36672 total field count; avg 95.5 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
    test: docvalues...........OK [0 docvalues fields; 0 BINARY; 0 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 0 SORTED_SET]

  20 of 36: name=_93ks docCount=350
    codec=Lucene49
    compound=true
    numFiles=3
    size (MB)=1.545
    diagnostics = {timestamp=1409217463967, os=Linux, os.version=3.2.0-4-amd64, mergeFactor=10, source=merge, lucene.version=4.9.0 1604085 - rmuir - 2014-06-20 06:22:23, os.arch=amd64, mergeMaxNumSegments=-1, java.version=1.7.0_65, java.vendor=Oracle Corporation}
    no deletions
    test: open reader.........OK
    test: check integrity.....OK
    test: check live docs.....OK
    test: fields..............OK [73 fields]
    test: field norms.........OK [17 fields]
    test: terms, freq, prox...OK [31937 terms; 82243 terms/docs pairs; 105008 tokens]
    test: stored fields.......OK [45736 total field count; avg 130.674 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
    test: docvalues...........OK [0 docvalues fields; 0 BINARY; 0 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 0 SORTED_SET]

  21 of 36: name=_93lm docCount=391
    codec=Lucene49
    compound=true
    numFiles=3
    size (MB)=2.274
    diagnostics = {timestamp=1409217750613, os=Linux, os.version=3.2.0-4-amd64, mergeFactor=10, source=merge, lucene.version=4.9.0 1604085 - rmuir - 2014-06-20 06:22:23, os.arch=amd64, mergeMaxNumSegments=-1, java.version=1.7.0_65, java.vendor=Oracle Corporation}
    no deletions
    test: open reader.........OK
    test: check integrity.....OK
    test: check live docs.....OK
    test: fields..............OK [76 fields]
    test: field norms.........OK [20 fields]
    test: terms, freq, prox...OK [44408 terms; 112971 terms/docs pairs; 184206 tokens]
    test: stored fields.......OK [58691 total field count; avg 150.105 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
    test: docvalues...........OK [0 docvalues fields; 0 BINARY; 0 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 0 SORTED_SET]

  22 of 36: name=_93lj docCount=66
    codec=Lucene49
    compound=true
    numFiles=3
    size (MB)=1.119
    diagnostics = {timestamp=1409217720204, os=Linux, os.version=3.2.0-4-amd64, source=flush, lucene.version=4.9.0 1604085 - rmuir - 2014-06-20 06:22:23, os.arch=amd64, java.version=1.7.0_65, java.vendor=Oracle Corporation}
    no deletions
    test: open reader.........OK
    test: check integrity.....OK
    test: check live docs.....OK
    test: fields..............OK [77 fields]
    test: field norms.........OK [21 fields]
    test: terms, freq, prox...OK [29786 terms; 45723 terms/docs pairs; 101975 tokens]
    test: stored fields.......OK [20039 total field count; avg 303.621 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
    test: docvalues...........OK [0 docvalues fields; 0 BINARY; 0 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 0 SORTED_SET]

  23 of 36: name=_93n0 docCount=115
    codec=Lucene49
    compound=true
    numFiles=3
    size (MB)=1.676
    diagnostics = {timestamp=1409218481257, os=Linux, os.version=3.2.0-4-amd64, mergeFactor=10, source=merge, lucene.version=4.9.0 1604085 - rmuir - 2014-06-20 06:22:23, os.arch=amd64, mergeMaxNumSegments=-1, java.version=1.7.0_65, java.vendor=Oracle Corporation}
    no deletions
    test: open reader.........OK
    test: check integrity.....OK
    test: check live docs.....OK
    test: fields..............OK [75 fields]
    test: field norms.........OK [19 fields]
    test: terms, freq, prox...OK [38045 terms; 64445 terms/docs pairs; 130412 tokens]
    test: stored fields.......OK [30794 total field count; avg 267.774 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
    test: docvalues...........OK [0 docvalues fields; 0 BINARY; 0 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 0 SORTED_SET]

  24 of 36: name=_93ll docCount=67
    codec=Lucene49
    compound=true
    numFiles=3
    size (MB)=1.18
    diagnostics = {timestamp=1409217750593, os=Linux, os.version=3.2.0-4-amd64, source=flush, lucene.version=4.9.0 1604085 - rmuir - 2014-06-20 06:22:23, os.arch=amd64, java.version=1.7.0_65, java.vendor=Oracle Corporation}
    no deletions
    test: open reader.........OK
    test: check integrity.....OK
    test: check live docs.....OK
    test: fields..............OK [76 fields]
    test: field norms.........OK [20 fields]
    test: terms, freq, prox...OK [29734 terms; 51893 terms/docs pairs; 103727 tokens]
    test: stored fields.......OK [23351 total field count; avg 348.522 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
    test: docvalues...........OK [0 docvalues fields; 0 BINARY; 0 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 0 SORTED_SET]

  25 of 36: name=_93na docCount=89
    codec=Lucene49
    compound=true
    numFiles=3
    size (MB)=1.117
    diagnostics = {timestamp=1409218961823, os=Linux, os.version=3.2.0-4-amd64, mergeFactor=10, source=merge, lucene.version=4.9.0 1604085 - rmuir - 2014-06-20 06:22:23, os.arch=amd64, mergeMaxNumSegments=-1, java.version=1.7.0_65, java.vendor=Oracle Corporation}
    no deletions
    test: open reader.........OK
    test: check integrity.....OK
    test: check live docs.....OK
    test: fields..............OK [77 fields]
    test: field norms.........OK [21 fields]
    test: terms, freq, prox...OK [29682 terms; 49117 terms/docs pairs; 95509 tokens]
    test: stored fields.......OK [25643 total field count; avg 288.124 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
    test: docvalues...........OK [0 docvalues fields; 0 BINARY; 0 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 0 SORTED_SET]

  26 of 36: name=_93nv docCount=82
    codec=Lucene49
    compound=true
    numFiles=3
    size (MB)=1.468
    diagnostics = {timestamp=1409219288969, os=Linux, os.version=3.2.0-4-amd64, mergeFactor=10, source=merge, lucene.version=4.9.0 1604085 - rmuir - 2014-06-20 06:22:23, os.arch=amd64, mergeMaxNumSegments=-1, java.version=1.7.0_65, java.vendor=Oracle Corporation}
    no deletions
    test: open reader.........OK
    test: check integrity.....OK
    test: check live docs.....OK
    test: fields..............OK [76 fields]
    test: field norms.........OK [20 fields]
    test: terms, freq, prox...OK [34667 terms; 59010 terms/docs pairs; 162033 tokens]
    test: stored fields.......OK [33218 total field count; avg 405.098 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
    test: docvalues...........OK [0 docvalues fields; 0 BINARY; 0 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 0 SORTED_SET]

  27 of 36: name=_93mg docCount=142
    codec=Lucene49
    compound=true
    numFiles=3
    size (MB)=1.717
    diagnostics = {timestamp=1409218089669, os=Linux, os.version=3.2.0-4-amd64, mergeFactor=10, source=merge, lucene.version=4.9.0 1604085 - rmuir - 2014-06-20 06:22:23, os.arch=amd64, mergeMaxNumSegments=-1, java.version=1.7.0_65, java.vendor=Oracle Corporation}
    no deletions
    test: open reader.........OK
    test: check integrity.....OK
    test: check live docs.....OK
    test: fields..............OK [77 fields]
    test: field norms.........OK [21 fields]
    test: terms, freq, prox...OK [43985 terms; 78475 terms/docs pairs; 155589 tokens]
    test: stored fields.......OK [34668 total field count; avg 244.141 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
    test: docvalues...........OK [0 docvalues fields; 0 BINARY; 0 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 0 SORTED_SET]

  28 of 36: name=_93nu docCount=1
    codec=Lucene49
    compound=true
    numFiles=3
    size (MB)=0.004
    diagnostics = {timestamp=1409219294459, os=Linux, os.version=3.2.0-4-amd64, source=flush, lucene.version=4.9.0 1604085 - rmuir - 2014-06-20 06:22:23, os.arch=amd64, java.version=1.7.0_65, java.vendor=Oracle Corporation}
    no deletions
    test: open reader.........OK
    test: check integrity.....OK
    test: check live docs.....OK
    test: fields..............OK [18 fields]
    test: field norms.........OK [1 fields]
    test: terms, freq, prox...OK [18 terms; 18 terms/docs pairs; 0 tokens]
    test: stored fields.......OK [18 total field count; avg 18 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
    test: docvalues...........OK [0 docvalues fields; 0 BINARY; 0 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 0 SORTED_SET]

  29 of 36: name=_93nw docCount=7
    codec=Lucene49
    compound=true
    numFiles=3
    size (MB)=0.057
    diagnostics = {timestamp=1409219315909, os=Linux, os.version=3.2.0-4-amd64, source=flush, lucene.version=4.9.0 1604085 - rmuir - 2014-06-20 06:22:23, os.arch=amd64, java.version=1.7.0_65, java.vendor=Oracle Corporation}
    no deletions
    test: open reader.........OK
    test: check integrity.....OK
    test: check live docs.....OK
    test: fields..............OK [64 fields]
    test: field norms.........OK [12 fields]
    test: terms, freq, prox...OK [1393 terms; 1858 terms/docs pairs; 2189 tokens]
    test: stored fields.......OK [1063 total field count; avg 151.857 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
    test: docvalues...........OK [0 docvalues fields; 0 BINARY; 0 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 0 SORTED_SET]

  30 of 36: name=_93nx docCount=3
    codec=Lucene49
    compound=true
    numFiles=3
    size (MB)=0.005
    diagnostics = {timestamp=1409219321906, os=Linux, os.version=3.2.0-4-amd64, source=flush, lucene.version=4.9.0 1604085 - rmuir - 2014-06-20 06:22:23, os.arch=amd64, java.version=1.7.0_65, java.vendor=Oracle Corporation}
    no deletions
    test: open reader.........OK
    test: check integrity.....OK
    test: check live docs.....OK
    test: fields..............OK [18 fields]
    test: field norms.........OK [1 fields]
    test: terms, freq, prox...OK [34 terms; 54 terms/docs pairs; 0 tokens]
    test: stored fields.......OK [54 total field count; avg 18 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
    test: docvalues...........OK [0 docvalues fields; 0 BINARY; 0 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 0 SORTED_SET]

  31 of 36: name=_93ny docCount=8
    codec=Lucene49
    compound=true
    numFiles=3
    size (MB)=0.019
    diagnostics = {timestamp=1409219344624, os=Linux, os.version=3.2.0-4-amd64, source=flush, lucene.version=4.9.0 1604085 - rmuir - 2014-06-20 06:22:23, os.arch=amd64, java.version=1.7.0_65, java.vendor=Oracle Corporation}
    no deletions
    test: open reader.........OK
    test: check integrity.....OK
    test: check live docs.....OK
    test: fields..............OK [41 fields]
    test: field norms.........OK [8 fields]
    test: terms, freq, prox...OK [454 terms; 1019 terms/docs pairs; 934 tokens]
    test: stored fields.......OK [246 total field count; avg 30.75 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
    test: docvalues...........OK [0 docvalues fields; 0 BINARY; 0 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 0 SORTED_SET]

  32 of 36: name=_93nz docCount=3
    codec=Lucene49
    compound=true
    numFiles=3
    size (MB)=0.012
    diagnostics = {timestamp=1409219349594, os=Linux, os.version=3.2.0-4-amd64, source=flush, lucene.version=4.9.0 1604085 - rmuir - 2014-06-20 06:22:23, os.arch=amd64, java.version=1.7.0_65, java.vendor=Oracle Corporation}
    no deletions
    test: open reader.........OK
    test: check integrity.....OK
    test: check live docs.....OK
    test: fields..............OK [34 fields]
    test: field norms.........OK [8 fields]
    test: terms, freq, prox...OK [232 terms; 262 terms/docs pairs; 202 tokens]
    test: stored fields.......OK [97 total field count; avg 32.333 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
    test: docvalues...........OK [0 docvalues fields; 0 BINARY; 0 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 0 SORTED_SET]

  33 of 36: name=_93o0 docCount=8
    codec=Lucene49
    compound=true
    numFiles=3
    size (MB)=0.006
    diagnostics = {timestamp=1409219380864, os=Linux, os.version=3.2.0-4-amd64, source=flush, lucene.version=4.9.0 1604085 - rmuir - 2014-06-20 06:22:23, os.arch=amd64, java.version=1.7.0_65, java.vendor=Oracle Corporation}
    no deletions
    test: open reader.........OK
    test: check integrity.....OK
    test: check live docs.....OK
    test: fields..............OK [18 fields]
    test: field norms.........OK [1 fields]
    test: terms, freq, prox...OK [53 terms; 144 terms/docs pairs; 0 tokens]
    test: stored fields.......OK [144 total field count; avg 18 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
    test: docvalues...........OK [0 docvalues fields; 0 BINARY; 0 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 0 SORTED_SET]

  34 of 36: name=_93o1 docCount=1
    codec=Lucene49
    compound=true
    numFiles=3
    size (MB)=0.005
    diagnostics = {timestamp=1409219381068, os=Linux, os.version=3.2.0-4-amd64, source=flush, lucene.version=4.9.0 1604085 - rmuir - 2014-06-20 06:22:23, os.arch=amd64, java.version=1.7.0_65, java.vendor=Oracle Corporation}
    no deletions
    test: open reader.........OK
    test: check integrity.....OK
    test: check live docs.....OK
    test: fields..............OK [18 fields]
    test: field norms.........OK [1 fields]
    test: terms, freq, prox...OK [18 terms; 18 terms/docs pairs; 0 tokens]
    test: stored fields.......OK [18 total field count; avg 18 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
    test: docvalues...........OK [0 docvalues fields; 0 BINARY; 0 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 0 SORTED_SET]

  35 of 36: name=_93o2 docCount=10
    codec=Lucene49
    compound=true
    numFiles=3
    size (MB)=0.006
    diagnostics = {timestamp=1409219431197, os=Linux, os.version=3.2.0-4-amd64, source=flush, lucene.version=4.9.0 1604085 - rmuir - 2014-06-20 06:22:23, os.arch=amd64, java.version=1.7.0_65, java.vendor=Oracle Corporation}
    no deletions
    test: open reader.........OK
    test: check integrity.....OK
    test: check live docs.....OK
    test: fields..............OK [18 fields]
    test: field norms.........OK [1 fields]
    test: terms, freq, prox...OK [60 terms; 180 terms/docs pairs; 0 tokens]
    test: stored fields.......OK [180 total field count; avg 18 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
    test: docvalues...........OK [0 docvalues fields; 0 BINARY; 0 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 0 SORTED_SET]

  36 of 36: name=_93o3 docCount=1
    codec=Lucene49
    compound=true
    numFiles=3
    size (MB)=0.005
    diagnostics = {timestamp=1409219437064, os=Linux, os.version=3.2.0-4-amd64, source=flush, lucene.version=4.9.0 1604085 - rmuir - 2014-06-20 06:22:23, os.arch=amd64, java.version=1.7.0_65, java.vendor=Oracle Corporation}
    no deletions
    test: open reader.........OK
    test: check integrity.....OK
    test: check live docs.....OK
    test: fields..............OK [18 fields]
    test: field norms.........OK [1 fields]
    test: terms, freq, prox...OK [18 terms; 18 terms/docs pairs; 0 tokens]
    test: stored fields.......OK [18 total field count; avg 18 fields per doc]
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
    test: docvalues...........OK [0 docvalues fields; 0 BINARY; 0 NUMERIC; 0 SORTED; 0 SORTED_NUMERIC; 0 SORTED_SET]

No problems were detected with this index.
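(Aside: the output above is from Lucene's CheckIndex tool, which the checkindex.sh script wraps. It can also be invoked by hand; a sketch, where both the classpath and the index path are assumptions based on the Debian package layout mentioned earlier, so adjust them to your installation:)

```shell
# Sketch: run Lucene's CheckIndex directly against the Solr index.
# Classpath follows /usr/share/java/yacy/ as used above; the index path
# is an assumption and depends on your DATA directory.
java -cp '/usr/share/java/yacy/*' org.apache.lucene.index.CheckIndex \
  /usr/share/yacy/DATA/INDEX/freeworld/SEGMENTS/solr_4_9/collection1/data/index
# Adding -fix makes CheckIndex drop unreadable segments; the documents in
# those segments are LOST, so stop YaCy and keep a copy of the index first.
```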


So it's a different problem, then?

I have now started yacy again; here are the first few problems:

Code:
I 2014/08/30 01:18:06 org.apache.solr.rest.ManagedResourceStorage Reading _rest_managed.json using file:dir=/usr/share/yacy/DATA/INDEX/freeworld/SEGMENTS/solr_4_9/webgraph/conf
W 2014/08/30 01:18:06 org.apache.solr.rest.ManagedResource No stored data found for /rest/managed
W 2014/08/30 01:18:06 org.apache.solr.rest.ManagedResource No registered observers for /rest/managed
I 2014/08/30 01:18:06 org.apache.solr.rest.RestManager Initializing 0 registered ManagedResources
E 2014/08/30 01:18:17 org.apache.solr.update.SolrIndexWriter SolrIndexWriter was not closed prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!!
E 2014/08/30 01:18:17 org.apache.solr.core.SolrCore Error loading core:java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
        at java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.util.concurrent.FutureTask.get(FutureTask.java:188)
        at org.apache.solr.core.CoreContainer.load(CoreContainer.java:301)
        at org.apache.solr.core.CoreContainer.createAndLoad(CoreContainer.java:176)
        at net.yacy.cora.federate.solr.instance.EmbeddedInstance.<init>(EmbeddedInstance.java:82)
        at net.yacy.search.index.Fulltext.connectLocalSolr(Fulltext.java:133)
        at net.yacy.search.Switchboard.<init>(Switchboard.java:518)
        at net.yacy.yacy.startup(yacy.java:191)
        at net.yacy.yacy.main(yacy.java:683)
Caused by: java.lang.OutOfMemoryError: Java heap space
        at org.apache.lucene.util.fst.BytesStore.<init>(BytesStore.java:68)
        at org.apache.lucene.util.fst.FST.<init>(FST.java:373)
        at org.apache.lucene.util.fst.FST.<init>(FST.java:308)
        at org.apache.lucene.codecs.blocktree.FieldReader.<init>(FieldReader.java:85)
        at org.apache.lucene.codecs.blocktree.BlockTreeTermsReader.<init>(BlockTreeTermsReader.java:191)
        at org.apache.lucene.codecs.lucene41.Lucene41PostingsFormat.fieldsProducer(Lucene41PostingsFormat.java:441)
        at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.<init>(PerFieldPostingsFormat.java:197)
        at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:254)
        at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:120)
        at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:107)
        at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:143)
        at org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:237)
        at org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:98)
        at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:394)
        at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:112)
        at org.apache.solr.core.StandardIndexReaderFactory.newReader(StandardIndexReaderFactory.java:41)
        at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1526)
        at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1672)
        at org.apache.solr.core.SolrCore.<init>(SolrCore.java:840)
        at org.apache.solr.core.SolrCore.<init>(SolrCore.java:643)
        at org.apache.solr.core.CoreContainer.create(CoreContainer.java:556)
        at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:261)
        at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:253)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

I 2014/08/30 01:18:18 SolrEmbeddedInstance detected default solr core: collection1
W 2014/08/30 01:18:18 ConcurrentLog java.io.IOException: cannot get the default core; available = 14651016, free = 14651016
java.io.IOException: cannot get the default core; available = 14651016, free = 14651016
        at net.yacy.cora.federate.solr.instance.EmbeddedInstance.<init>(EmbeddedInstance.java:92)
        at net.yacy.search.index.Fulltext.connectLocalSolr(Fulltext.java:133)
        at net.yacy.search.Switchboard.<init>(Switchboard.java:518)
        at net.yacy.yacy.startup(yacy.java:191)
        at net.yacy.yacy.main(yacy.java:683)
E 2014/08/30 01:18:18 org.apache.solr.core.SolrCore REFCOUNT ERROR: unreferenced org.apache.solr.core.SolrCore@2e352f85 (collection1) has a reference count of 1

After that it looks fine for a while, then:
Code:
W 2014/08/30 01:19:00 ConcurrentLog java.lang.reflect.InvocationTargetException
java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at net.yacy.http.servlets.YaCyDefaultServlet.invokeServlet(YaCyDefaultServlet.java:655)
        at net.yacy.http.servlets.YaCyDefaultServlet.handleTemplate(YaCyDefaultServlet.java:811)
        at net.yacy.http.servlets.YaCyDefaultServlet.doGet(YaCyDefaultServlet.java:317)
        at net.yacy.http.servlets.YaCyDefaultServlet.doPost(YaCyDefaultServlet.java:379)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
        at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:769)
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
        at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:553)
        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1125)
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1059)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
        at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
        at net.yacy.http.CrashProtectionHandler.handle(CrashProtectionHandler.java:33)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
        at org.eclipse.jetty.server.Server.handle(Server.java:485)
        at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:290)
        at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:248)
        at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:606)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:535)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
        at net.yacy.search.index.Fulltext.getLoadTime(Fulltext.java:491)
        at transferRWI.respond(transferRWI.java:239)
        ... 31 more
W 2014/08/30 01:19:00 org.eclipse.jetty.servlet.ServletHandler
javax.servlet.ServletException: /usr/share/yacy/htroot/yacy/transferRWI.html
        at net.yacy.http.servlets.YaCyDefaultServlet.handleTemplate(YaCyDefaultServlet.java:815)
        at net.yacy.http.servlets.YaCyDefaultServlet.doGet(YaCyDefaultServlet.java:317)
        at net.yacy.http.servlets.YaCyDefaultServlet.doPost(YaCyDefaultServlet.java:379)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
        at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:769)
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
        at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:553)
        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1125)
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1059)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
        at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
        at net.yacy.http.CrashProtectionHandler.handle(CrashProtectionHandler.java:33)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
        at org.eclipse.jetty.server.Server.handle(Server.java:485)
        at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:290)
        at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:248)
        at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:606)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:535)
        at java.lang.Thread.run(Thread.java:745)

Then, a short time later:
Code:
W 2014/08/30 01:19:27 ConcurrentLog java.lang.reflect.InvocationTargetException
java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at net.yacy.kelondro.workflow.InstantBusyThread.job(InstantBusyThread.java:107)
        at net.yacy.kelondro.workflow.AbstractBusyThread.run(AbstractBusyThread.java:190)
Caused by: java.lang.NullPointerException
        at net.yacy.search.index.Fulltext.getLoadTime(Fulltext.java:491)
        at net.yacy.peers.Transmission$Chunk.add(Transmission.java:179)
        at net.yacy.peers.Dispatcher.enqueueContainersToBuffer(Dispatcher.java:287)
        at net.yacy.peers.Dispatcher.selectContainersEnqueueToBuffer(Dispatcher.java:323)
        at net.yacy.search.Switchboard.dhtTransferJob(Switchboard.java:3452)
        ... 6 more
W 2014/08/30 01:19:27 ConcurrentLog java.lang.NullPointerException
java.lang.NullPointerException
        at net.yacy.search.index.Fulltext.getLoadTime(Fulltext.java:491)
        at net.yacy.peers.Transmission$Chunk.add(Transmission.java:179)
        at net.yacy.peers.Dispatcher.enqueueContainersToBuffer(Dispatcher.java:287)
        at net.yacy.peers.Dispatcher.selectContainersEnqueueToBuffer(Dispatcher.java:323)
        at net.yacy.search.Switchboard.dhtTransferJob(Switchboard.java:3452)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at net.yacy.kelondro.workflow.InstantBusyThread.job(InstantBusyThread.java:107)
        at net.yacy.kelondro.workflow.AbstractBusyThread.run(AbstractBusyThread.java:190)
W 2014/08/30 01:19:27 ConcurrentLog java.lang.NullPointerException
java.lang.NullPointerException
        at net.yacy.search.index.Fulltext.getLoadTime(Fulltext.java:491)
        at net.yacy.peers.Transmission$Chunk.add(Transmission.java:179)
        at net.yacy.peers.Dispatcher.enqueueContainersToBuffer(Dispatcher.java:287)
        at net.yacy.peers.Dispatcher.selectContainersEnqueueToBuffer(Dispatcher.java:323)
        at net.yacy.search.Switchboard.dhtTransferJob(Switchboard.java:3452)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at net.yacy.kelondro.workflow.InstantBusyThread.job(InstantBusyThread.java:107)
        at net.yacy.kelondro.workflow.AbstractBusyThread.run(AbstractBusyThread.java:190)
E 2014/08/30 01:19:27 BUSYTHREAD Runtime Error in serverInstantThread.job, thread 'BusyThread net.yacy.search.Switchboard.dhtTransferJob': null; target exception: null
java.lang.NullPointerException
        at net.yacy.search.index.Fulltext.getLoadTime(Fulltext.java:491)
        at net.yacy.peers.Transmission$Chunk.add(Transmission.java:179)
        at net.yacy.peers.Dispatcher.enqueueContainersToBuffer(Dispatcher.java:287)
        at net.yacy.peers.Dispatcher.selectContainersEnqueueToBuffer(Dispatcher.java:323)
        at net.yacy.search.Switchboard.dhtTransferJob(Switchboard.java:3452)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at net.yacy.kelondro.workflow.InstantBusyThread.job(InstantBusyThread.java:107)
        at net.yacy.kelondro.workflow.AbstractBusyThread.run(AbstractBusyThread.java:190)

And it continues merrily in much the same way.

Re: Database apparently broken, can I repair it?

Post by zottel » Sat Aug 30, 2014 12:42 am

Regarding the OutOfMemoryError at the start: I thought I had once granted yacy 3.5G of memory via the web interface, but that may still have been on my old installation. In /usr/share/yacy/defaults/yacy.init, however, I only found Xmx600m under javastart_Xmx. As a test I have now raised that to Xmx3500m, which leads to the following result:

Code:
I 2014/08/30 01:44:32 org.apache.solr.rest.ManagedResourceStorage Reading _rest_managed.json using file:dir=/usr/share/yacy/DATA/INDEX/freeworld/SEGMENTS/solr_4_9/webgraph/conf
W 2014/08/30 01:44:32 org.apache.solr.rest.ManagedResource No stored data found for /rest/managed
W 2014/08/30 01:44:32 org.apache.solr.rest.ManagedResource No registered observers for /rest/managed
I 2014/08/30 01:44:32 org.apache.solr.rest.RestManager Initializing 0 registered ManagedResources
E 2014/08/30 01:44:36 org.apache.solr.core.CoreContainer Unable to create core: collection1
org.apache.solr.common.SolrException: Error Instantiating Update Handler, solr.DirectUpdateHandler2 failed to instantiate org.apache.solr.update.UpdateHandler
        at org.apache.solr.core.SolrCore.<init>(SolrCore.java:868)
        at org.apache.solr.core.SolrCore.<init>(SolrCore.java:643)
        at org.apache.solr.core.CoreContainer.create(CoreContainer.java:556)
        at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:261)
        at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:253)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.common.SolrException: Error Instantiating Update Handler, solr.DirectUpdateHandler2 failed to instantiate org.apache.solr.update.UpdateHandler
        at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:561)
        at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:617)
        at org.apache.solr.core.SolrCore.<init>(SolrCore.java:830)
        ... 10 more
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:547)
        ... 12 more
Caused by: java.lang.OutOfMemoryError: Java heap space
        at org.apache.solr.update.TransactionLog.<init>(TransactionLog.java:154)
        at org.apache.solr.update.UpdateLog.init(UpdateLog.java:261)
        at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:134)
        at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:94)
        at org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:100)
        ... 17 more
E 2014/08/30 01:44:36 org.apache.solr.core.CoreContainer null:org.apache.solr.common.SolrException: Unable to create core: collection1
        at org.apache.solr.core.CoreContainer.recordAndThrow(CoreContainer.java:911)
        at org.apache.solr.core.CoreContainer.create(CoreContainer.java:568)
        at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:261)
        at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:253)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.common.SolrException: Error Instantiating Update Handler, solr.DirectUpdateHandler2 failed to instantiate org.apache.solr.update.UpdateHandler
        at org.apache.solr.core.SolrCore.<init>(SolrCore.java:868)
        at org.apache.solr.core.SolrCore.<init>(SolrCore.java:643)
        at org.apache.solr.core.CoreContainer.create(CoreContainer.java:556)
        ... 8 more
Caused by: org.apache.solr.common.SolrException: Error Instantiating Update Handler, solr.DirectUpdateHandler2 failed to instantiate org.apache.solr.update.UpdateHandler
        at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:561)
        at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:617)
        at org.apache.solr.core.SolrCore.<init>(SolrCore.java:830)
        ... 10 more
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:547)
        ... 12 more
Caused by: java.lang.OutOfMemoryError: Java heap space
        at org.apache.solr.update.TransactionLog.<init>(TransactionLog.java:154)
        at org.apache.solr.update.UpdateLog.init(UpdateLog.java:261)
        at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:134)
        at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:94)
        at org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:100)
        ... 17 more
I 2014/08/30 01:44:36 SolrEmbeddedInstance detected default solr core: collection1
E 2014/08/30 01:44:36 STARTUP YaCy cannot start: SolrCore 'collection1' is not available due to init failure: Error Instantiating Update Handler, solr.DirectUpdateHandler2 failed to instantiate org.apache.solr.update.UpdateHandler
org.apache.solr.common.SolrException: SolrCore 'collection1' is not available due to init failure: Error Instantiating Update Handler, solr.DirectUpdateHandler2 failed to instantiate org.apache.solr.update.UpdateHandler
        at org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:753)
        at net.yacy.cora.federate.solr.instance.EmbeddedInstance.<init>(EmbeddedInstance.java:89)
        at net.yacy.search.index.Fulltext.connectLocalSolr(Fulltext.java:133)
        at net.yacy.search.Switchboard.<init>(Switchboard.java:518)
        at net.yacy.yacy.startup(yacy.java:191)
        at net.yacy.yacy.main(yacy.java:683)
Caused by: org.apache.solr.common.SolrException: Error Instantiating Update Handler, solr.DirectUpdateHandler2 failed to instantiate org.apache.solr.update.UpdateHandler
        at org.apache.solr.core.SolrCore.<init>(SolrCore.java:868)
        at org.apache.solr.core.SolrCore.<init>(SolrCore.java:643)
        at org.apache.solr.core.CoreContainer.create(CoreContainer.java:556)
        at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:261)
        at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:253)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.common.SolrException: Error Instantiating Update Handler, solr.DirectUpdateHandler2 failed to instantiate org.apache.solr.update.UpdateHandler
        at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:561)
        at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:617)
        at org.apache.solr.core.SolrCore.<init>(SolrCore.java:830)
        ... 10 more
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:547)
        ... 12 more
Caused by: java.lang.OutOfMemoryError: Java heap space
        at org.apache.solr.update.TransactionLog.<init>(TransactionLog.java:154)
        at org.apache.solr.update.UpdateLog.init(UpdateLog.java:261)
        at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:134)
        at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:94)
        at org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:100)
        ... 17 more

… and the last error repeated once more.

Does anyone have an idea what I could do here?
zottel
 
Posts: 51
Joined: Wed Jan 16, 2013 3:04 pm

Re: Database apparently broken, can I repair it?

Post by flegno » Sat Aug 30, 2014 4:23 am

Hello zottel,
zottel wrote: Does anyone have an idea what I could do here?
Yesterday, a few hours earlier than you, I went through something quite similar on a Windows system. My YaCy instance is running again. I posted about it in the thread "YaCy nach dem PC-Absturz kaputt, was kann ich machen?"
flegno
 
Posts: 232
Joined: Sun Aug 17, 2014 4:23 pm

Re: Database apparently broken, can I repair it?

Post by Orbiter » Sat Aug 30, 2014 9:45 am

This is an error inside Solr where I am at a loss as well. There are some hints at http://wiki.apache.org/solr/SolrPerform ... #Java_Heap, but all it says is to raise Xmx. My experience with Solr is rather that problems come back if you raise Xmx so far that the OS does not have much RAM left. Try to set it so that the OS keeps at least 1/3 of the total memory.
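That rule of thumb (leave the OS at least 1/3 of total memory) can be sketched as a small shell calculation. This is only a sketch: it assumes a Linux system with /proc/meminfo, and the `javastart_Xmx` key mentioned in the comment is the yacy.conf setting name as I remember it, which may differ between YaCy versions.

```shell
# Suggest an Xmx value that leaves the OS roughly 1/3 of total RAM.
suggest_xmx_mb() {
    # $1 = total memory in KiB; print 2/3 of it, converted to MiB
    echo $(( $1 * 2 / 3 / 1024 ))
}

if [ -r /proc/meminfo ]; then
    total_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
    echo "Suggested Xmx: $(suggest_xmx_mb "$total_kb")m"
    # then set it in YaCy's config, e.g. (key name assumed):
    # javastart_Xmx=Xmx<that value>m
fi
```

On an 8 GiB machine this suggests roughly 5400m, which keeps well clear of the "give Java everything" trap described above.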
Orbiter
 
Posts: 5792
Joined: Tue Jun 26, 2007 10:58 pm
Location: Frankfurt am Main

Re: Database apparently broken, can I repair it?

Post by zottel » Sat Aug 30, 2014 12:07 pm

I have now set Xmx to 7500m, giving Java practically all of the RAM. Unfortunately, still the same.

Rather strange, since the server ran without problems on 600m until the day before yesterday.

While searching I read somewhere (without being able to make much sense of it) that all fields must always fit into memory. In the checkindex.sh output above I see parts averaging more than 500 fields/document. Does that mean anything? :-)

If this cannot be fixed, what is the best way to delete the index?
zottel
 
Posts: 51
Joined: Wed Jan 16, 2013 3:04 pm

Re: Database apparently broken, can I repair it?

Post by flegno » Sat Aug 30, 2014 12:34 pm

Orbiter wrote: My experience with Solr is rather that problems come back if you raise Xmx so far that the OS does not have much RAM left. Try to set it so that the OS keeps at least 1/3 of the total memory.

zottel wrote: I have now set Xmx to 7500m, giving Java practically all of the RAM. Unfortunately, still the same.
Rather strange, since the server ran without problems on 600m until the day before yesterday.

I understood Orbiter's point to be that problems can arise when you set Xmx too high, i.e. when too little memory is left for the OS.
flegno
 
Posts: 232
Joined: Sun Aug 17, 2014 4:23 pm

Re: Database apparently broken, can I repair it?

Post by zottel » Sun Aug 31, 2014 11:31 am

Yes, when the OS no longer has enough memory and heavy swapping sets in. But I had stopped everything else relevant (especially the web server) beforehand, and nothing was swapping.
zottel
 
Posts: 51
Joined: Wed Jan 16, 2013 3:04 pm

Re: Database apparently broken, can I repair it?

Post by zottel » Tue Sep 02, 2014 10:07 pm

Asking once more: is there a simple way to delete the index? clearindex.sh makes an API call, and if Solr is not running, that is unlikely to work. I tried simply renaming DATA/INDEX/ to INDEX.old. A new index is then created, but my YaCy's web interface does not respond.
zottel
 
Posts: 51
Joined: Wed Jan 16, 2013 3:04 pm

Re: Database apparently broken, can I repair it?

Post by sixcooler » Tue Sep 02, 2014 10:11 pm

Hello,

if you want to do it the hard way, try it with DATA/INDEX/freeworld/SEGMENTS.

cu, sixcooler.
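A sketch of that hard reset, with the stop/start steps added for safety. The SEGMENTS path comes from the post above; the /var/lib/yacy base directory for the Debian package is an assumption, so adjust it to your install. Moving the directory aside instead of deleting it keeps the old data recoverable until you are sure.

```shell
# Hard index reset: move the Solr SEGMENTS directory aside so that
# YaCy creates a fresh, empty index on the next start.
reset_index() {
    # $1 = directory containing DATA (assumed /var/lib/yacy for the
    # Debian package; adjust to your installation)
    base="$1/DATA/INDEX/freeworld"
    mv "$base/SEGMENTS" "$base/SEGMENTS.broken"  # keep until verified
}

# usage:
#   service yacy stop
#   reset_index /var/lib/yacy
#   service yacy start
#   # ... and later, once everything works: rm -rf .../SEGMENTS.broken
```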
sixcooler
 
Posts: 494
Joined: Thu Aug 14, 2008 5:22 pm

Re: Database apparently broken, can I repair it?

Post by zottel » Wed Sep 03, 2014 8:10 pm

Thanks, that worked better.

At first I thought it still wasn't working, until I finally noticed that YaCy had once again reset itself to port 8090 on its own. That has happened to me before. Hmpf. And not even changing the port in yacy.init helped against it. :evil:

Oh well. My peer is running again now, without the 27 GB of index it once had. A pity, but what can you do.
zottel
 
Posts: 51
Joined: Wed Jan 16, 2013 3:04 pm
