New USA User, many questions: interface/memory/crash...

Don't be shy - anything may be asked and discussed here. This is the forum for YaCy beginners. Here you can ask 'where do I have to click' questions and talk about the basics of search engine technology.
Forum rules
Questions are answered here and we try to sort out the problems of YaCy newbies. Please document answered questions in the YaCy wiki at http://wiki.yacy.de!

New USA User, many questions: interface/memory/crash...

Post by killswitch » Thu Mar 28, 2013 1:49 am


Hello and greetings - this is my first post. I hope you can understand English or can use Google Translate to read my text! If not, should I post a copy of my text in German so every user can read it? I have not found much English here, and the other English forum I found, yacy-forum.org, has no developers!!!


About my Setup
I run a 24/7/365 server in my home, as all good hackers must. This server runs Proxmox VE, a very good virtualization project. The system is custom-built: an E3-1230 Xeon CPU with 32 GB RAM, RAID10 4x1TB for "images" and RAID6 10x2TB for "storage." A good use for this system would be to help YaCy - what a great project; more people should get involved in the USA and other English-speaking countries, where there seems to be very little data/help!

I started my first YaCy install (killswitch_US_East) here, in an OpenVZ container with 320 GB of hard disk space and 4 GB of RAM. It uses a virtual appliance image, which is Ubuntu Server 12.04 32-bit. YaCy is installed from the Debian repository. I have found that I cannot run more than a 1400 MB Java heap on this OS - a pity - and now it seems prone to crashing, or sometimes the crawler pauses for no reason. I had the remote crawler turned ON to help the network, but have now turned it off to see if that stops the crashing... I have moved the heap setting up and down between 800 and 1400 MB, and also tried reducing the default RAM cache from 50000 to 5000 to head off possible out-of-memory errors. We will see if this makes it stop crashing. In the beginning, with low word (RWI) and URL counts, it seemed more stable. It is at almost 8,000,000 URLs/RWIs now. (8.000.000 for Europe :D )
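For anyone following along, the settings I am adjusting live in /etc/yacy/yacy.conf on the Debian install. I believe these are the relevant keys, though the names are from memory, so treat this as a sketch; the values shown are the ones I am currently experimenting with:

Code: Select all
# Java heap handed to the startup script (format: Xmx<megabytes>m)
javastart_Xmx=Xmx1400m
javastart_Xms=Xms800m
# RWI word-cache size, reduced from the default 50000
wordCacheMaxCount=5000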

Next, I figured that YaCy must be hungry for much more memory, given the crashing, so I started a new VM using KVM and the 64-bit Ubuntu 12.04.2 amd64 ISO. This system also gets 320 GB of disk and 4 GB of RAM for now. YaCy was again installed from the Debian repository. I found right away that the 1400 MB heap limit is gone, as expected on 64-bit - great! I have set this one to a 3000 MB heap for now; we will see how it likes that. Everything is default for now except the larger Java heap. It is a senior/crawler/indexer in freeworld as "killswitch_US_East2".

Interface Questions

I have mostly been wondering about the graphs. First, the Administration -> Peer Control -> Admin Console -> Status graph:
[Image]

This is from my new YaCy peer "killswitch_US_East2". What is the mess of peer names on it for? And why does the indexing cache jump up and down instead of just staying full at 50k? It is hard to read!

This: Administration -> Monitoring -> YaCy Network:
[Image]

What a BEAUTY! But, some questions of course. The peers are represented by colored dots - or solid circles, if you wish. What exactly do the color AND the size mean? Size - links? Words? Also, my peer is a different size in MY graph compared to its size on http://www.yacystats.de/! I am smaller on that website; my manhood is threatened! :lol: There are three colors for peers: blue, green, and a grey-ish or brown one - I have no idea (confused) what these colors indicate. I know that I am red (it looks brown), but what about those other colors? I have watched the good tutorial videos, but I do not think these questions are answered there.

Administration -> Monitoring -> YaCy Network -> Active Peers:
[Image]

I understand the first three Info indicators - Principal/Senior, accept crawl, and DHT Receive - no problem. But what is meant by the fourth one, "Node Candidate"?

Memory Optimization and Crashing

I have the problem with the 32-bit heap limit, as stated before, and I think this is a problem for larger database sizes. Is there a recommended heap size vs. links/RWI count? I think I pushed it too hard with the low memory limit plus remote crawling AND DHT indexing: too much load, not enough memory (heap)? Are there other variables I can change/optimize to match the heap size I select?

I hope the new 64-bit KVM system and the large 3000 MB heap will do better for crawling + DHT indexing. Any recommended optimizations for the heap or other variables to help with the crashing or queue pausing I have had? For the queue pausing, there was a notice under Admin Console -> Advanced Properties: right after "62_remotetriggeredcrawl_isPaused" there was a variable that contained a URL and said something to the effect that the SOLR database couldn't be contacted; the message disappeared after a YaCy restart.

Sorry for a long first post, I have more questions but I will wait and see for now!
killswitch
 
Posts: 10
Registered: Wed Mar 27, 2013 11:56 pm
Location: Ohio, USA

Re: New USA User, many questions: interface/memory/crash...

Post by killswitch » Thu Mar 28, 2013 3:11 am

Unfortunately, the new KVM YaCy just experienced the 'crawler pause' issue, so it must not be memory-related. The error is under Admin Console -> Advanced Properties. It says:

62_remotetriggeredcrawl_isPaused_cause: failed to send http://www.washingtonpost.com/world/the ... story.html to solr

Something causes solr to not respond or to time out. If you hit "Resume" it starts right off again, so solr is still working fine; it just seems to hiccup every so often and cause the queue pausing. Both the local and remote queues get paused when this happens, even though the remote queue is the origin of the error.

Perhaps, if this is some bug, I should run an external solr instance instead of the built-in one?
killswitch
 
Posts: 10
Registered: Wed Mar 27, 2013 11:56 pm
Location: Ohio, USA

Re: New USA User, many questions: interface/memory/crash...

Post by killswitch » Thu Mar 28, 2013 4:45 pm

My OpenVZ 32-bit YaCy continues to crash regularly, even with crawling turned off. The yacy00.log shows nothing interesting after this happens, so I'm not sure what is wrong with it. The last line in the log is:

I 2013/03/28 11:36:42 net.yacy.cora.federate.solr.connector.SolrServerConnector 1 results for q=id:"nKjW0geFAaMZ"

Currently I have it set to a 1400 MB heap; at the 800 MB heap setting the memory filled up and it disabled DHT transfers, so it basically just sat there doing nothing. This happened even with the RAM cache set to the low "5000" value instead of "50000".
killswitch
 
Posts: 10
Registered: Wed Mar 27, 2013 11:56 pm
Location: Ohio, USA

Re: New USA User, many questions: interface/memory/crash...

Post by sixcooler » Thu Mar 28, 2013 6:28 pm

Hello killswitch,

welcome @YaCy - I think it is OK to stay with English;
the results from Google Translate are unreadable.

Yes, YaCy needs a lot of RAM (heap) as the index grows.
And since we use solr for index storage, there is also heavy usage of virtual RAM.
Virtual RAM does not need to be available physically, but it does need to be addressable - that's why 64-bit is the way to go.
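As a rough back-of-the-envelope (assuming the common 3G/1G user/kernel split on 32-bit Linux):

Code: Select all
2^32 bytes                 = 4096 MiB of address space on 32 bit
- kernel reservation       ~ 1024 MiB (the usual 3G/1G Linux split)
- JVM code, thread stacks, native buffers, mapped files
= often only ~1400-2000 MiB left for one contiguous Java heap

That is about the 1400 MB ceiling you ran into.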

The peer names on the PerformanceGraph represent connections to other peers.
I share your opinion that this is unreadable for freeworld - maybe it is usable in other networks.

The green graph represents the word cache, which gets flushed / written to disk every minute, or whenever the heap runs low on memory.
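As a rough sketch of that policy (invented names, not the actual YaCy code):

Code: Select all
// Flush the RWI word cache to disk once a minute, or sooner if the
// heap runs low - this is why the cache graph jumps up and down.
if (millisSinceLastFlush >= 60_000
        || Runtime.getRuntime().freeMemory() < LOW_HEAP_WATERMARK) {
    flushWordCacheToDisk(); // hypothetical helper
}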

For the colors of the NetworkPicture, you should find a legend at the bottom of /Network.html.
The size of a peer in this graph reflects its count of links, but it is a very rough representation.

The node-candidate info is currently a test - these are peers with good response times and a direct connection to the internet (not behind a NAT router).

The word cache is not as memory-hungry as it was in the past. A max of 50000 should be OK for you.

The embedded solr is quite new for us, so I can't say what the limits are on a 32-bit system.
I recommend switching to 64-bit, if possible.

Cu, sixcooler.
sixcooler
 
Posts: 494
Registered: Thu Aug 14, 2008 5:22 pm

Re: New USA User, many questions: interface/memory/crash...

Post by killswitch » Fri Mar 29, 2013 10:58 am

I perhaps pushed too hard on the poor 32-bit YaCy install. With the crawling and indexing it filled the index so full in only a few days that it became unusable. I found that putting it in Robinson mode is the only way to keep it from crashing over time. This is not good - in Robinson mode it can no longer search the global network for me, so for an "end user" this means YaCy is only good for a few days and then must be shut down. If not, all YaCy can do is serve remote search requests against a URL/RWI database that will never be updated (stale), and the local user can only search the relatively small index stored locally - not ideal!

I have started a new test on this 32-bit install: I deleted all the data and index using the web interface's Index Administration, and then deleted /etc/yacy/yacy.conf so it will re-initialize with only the default values. It should run in this configuration, and at a minimum function as a local/global search portal for the end user. I will not use it as a web proxy just yet, as I believe any crawling could bring it to a similar fate of crashing too soon. This is how a normal end user might install and use YaCy, I would think, so it should be able to run OK indefinitely this way - if not, then there is a problem that needs to be addressed: some way to keep YaCy in the default configuration from using up all its memory and crashing. I hope it will not fail, and then I can attribute my tinkering as the cause of it going so bad on the first try. This long-term test will show whether YaCy can manage in the default 600 MB heap space, which many users will probably just leave alone, I would guess. It will only restart when the cron job pulls an update from the repository; otherwise it will run continuously for this test. I will also restart it manually if/when it crashes and save the logs for review.

I am still puzzled by this SOLR error that causes even my good 64-bit peer to pause the crawler queues at random. It is very infrequent, and by the time I see it, it is usually old news and the logs have long since been wiped away. These YaCy logs are VERY verbose, and the default 20 MB (20 files) gets overrun in a hurry! If I get lucky, I will be watching with "tail -f" when it happens someday. Otherwise, this is a most distressing problem with no easy solution that I can find, except to manually resume the queues when I visit the admin web console - much crawling time is lost this way.

The colors are not all listed on the network legend - or at least a few terms are not explained in an easy-to-understand way. Here is what I mean:

dark green font : senior/principal peers
light green font : passive peers
pink font : junior peers
red point : this peer

I see there is no reference to the color of the actual "circles" on the network graph - no mention of blue anywhere on the legend. Also, it mentions dark/light green, but I can only see one color of green text on the graph; maybe the difference is too subtle for me to see. What I see are blue circles, green circles, and grey circles. It looks like all the grey ones line up with pink font labels, so they must be juniors, I guess. Maybe they are really pink circles, but on the green ring they look grey...

That leaves two others, the blue and the bright green circles. The blue circles I take to mean senior, as most of those are the large ones, which makes some sense. But I am not sure about the green ones. It is too jumbled, but it looks like they might also be junior, like the pink ones. I cannot be sure what these colors mean. :?

Last question: how do you keep your RWI and URL index from going old and stale? I mean, for some sites one crawl is OK, but others - forums, news, etc. - must be crawled all the time to gather new words and URLs. How can this be done across the entire local index? I can see how to do it with a manual crawl, but what about the DHT-in entries - how can they be kept fresh? Surely there must be a way so that the index does not keep old entries forever?
killswitch
 
Posts: 10
Registered: Wed Mar 27, 2013 11:56 pm
Location: Ohio, USA

Re: New USA User, many questions: interface/memory/crash...

Post by killswitch » Fri Mar 29, 2013 11:54 am

I have also never been on a German forum before. How many Germans speak English and will understand me? 100%? 50%? 10%?

In the U.S., we can learn German at some universities, but unless we travel overseas, no one can use it! A lot of people learn Spanish; I had some in high school (9th-12th grade level) that I can't remember. Spanish is good for speaking with our Mexican friends to the south. Another popular one here is French - but it is also not used much; I suppose it could be used if we went to Quebec, Canada. My brother tried to learn Mandarin Chinese at university!! Hard work, he said, and of little use here for the most part, so it will shortly be forgotten without any practice.

Lots of people here have taken one or two foreign-language classes, but not many are fluent in a second or third language. I think it simply does not see much use here for most people!
killswitch
 
Posts: 10
Registered: Wed Mar 27, 2013 11:56 pm
Location: Ohio, USA

Re: New USA User, many questions: interface/memory/crash...

Post by killswitch » Fri Mar 29, 2013 6:26 pm

The new 32-bit killswitch_US_East test is going along nicely.

It has accumulated 1,161,644 URLs and 165,316 RWIs in the first 8 hours, a rapid pace. This could be influenced by the fact that it was known to the network prior to the reset of the database/index, perhaps making it fill faster than a virgin peer would. The memory usage has been very stable at the default settings Xms=90 Xmx=600:

Memory Usage
free: 13.4 MB
total: 116 MB
max: 116 MB

[Image]
killswitch
 
Posts: 10
Registered: Wed Mar 27, 2013 11:56 pm
Location: Ohio, USA

Re: New USA User, many questions: interface/memory/crash...

Post by killswitch » Fri Mar 29, 2013 7:09 pm

I have made another small tweak, as I found the logging throughput was nearing 1.5 GB/day! That is a lot of logging, mostly INFO-type entries. I came to this measurement by "rough guess": each yacyXX.log file is 1 MB in size, and there was about 1 minute between the timestamps on the files.

I have done a find/replace inside yacy.logging to change all INFO to WARNING; we'll see how this does. It certainly will free up some I/O on the disks - I think the logging might have been a high percentage of total I/O, but I am not sure. Has anyone else noticed this massive logging being a problem, or is it normally OK? It seems to be in excess of what a 'normal' program would do.
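For reference, yacy.logging is a java.util.logging properties file, so the find/replace amounts to entries like these (the logger names below are only illustrative):

Code: Select all
# before the find/replace: nearly everything logged at INFO
# net.yacy.level = INFO
# after: only warnings and errors reach the log files
net.yacy.level = WARNING
org.apache.solr.level = WARNING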

*Edit note:

I have found, after restarting the process to change the logging, that the maximum memory on our 32-bit friend has changed:

Memory Usage
free: 84.88 MB
total: 187.16 MB
max: 580 MB

Maybe during the first start, when it copies yacy.init --> yacy.conf, it did not apply the heap Xmx setting... After the restart, I see it has taken effect. I think the Xmx variable must be in MB, and the "Memory Usage max:" must be in MiB, as they never seem to match exactly.
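If anyone wants to reproduce the comparison, this little class prints what I believe the status page reads out, in both units. Note also that the JVM usually reports max heap as Xmx minus one GC survivor space, which may be another reason the numbers never match exactly:

Code: Select all
public class HeapCheck {
    public static void main(String[] args) {
        // What -Xmx maps to at runtime; typically a bit less than the
        // -Xmx value because one survivor space is excluded.
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.printf("max: %.2f MiB = %.2f MB%n",
                maxBytes / (1024.0 * 1024.0),   // mebibytes
                maxBytes / (1000.0 * 1000.0));  // megabytes
    }
}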
killswitch
 
Posts: 10
Registered: Wed Mar 27, 2013 11:56 pm
Location: Ohio, USA

Re: New USA User, many questions: interface/memory/crash...

Post by killswitch » Sat Mar 30, 2013 5:17 am

After reducing the logging verbosity, I was able to track down a recent queue pause event:

62_remotetriggeredcrawl_isPaused_cause : failed to send http://www.abc.net.au/melbourne/?ref=portal_m10 to solr

The log is full of this:

Code: Select all
W 2013/03/29 19:18:53 SOLR failed to send http://www.abc.net.au/local/sites/festivals/default.htm to solrorg.apache.solr.common.SolrException: com.spatial4j.core.exception.InvalidShapeExce$
E 2013/03/29 19:18:54 org.apache.solr.core.SolrCore org.apache.solr.common.SolrException: com.spatial4j.core.exception.InvalidShapeException: Invalid latitude: latitudes are range -90 to 9$
        at org.apache.solr.schema.LatLonType.createFields(LatLonType.java:70)
        at org.apache.solr.update.DocumentBuilder.addField(DocumentBuilder.java:193)
        at org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:269)
        at org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:73)
        at org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:201)
        at org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
        at org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
        at org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:481)
        at org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:350)
        at org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:246)
        at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:173)
        at org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:92)
        at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
        at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
        at org.apache.solr.core.SolrCore.execute(SolrCore.java:1816)
        at org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.request(EmbeddedSolrServer.java:150)
        at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:117)
        at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:116)
        at net.yacy.cora.federate.solr.connector.SolrServerConnector.add(SolrServerConnector.java:176)
        at net.yacy.cora.federate.solr.connector.MirrorSolrConnector.add(MirrorSolrConnector.java:171)
        at net.yacy.cora.federate.solr.connector.CachedSolrConnector.add(CachedSolrConnector.java:224)
        at net.yacy.search.index.Fulltext.putDocument(Fulltext.java:418)
        at net.yacy.search.index.Segment.storeDocument(Segment.java:556)
        at net.yacy.search.Switchboard.storeDocumentIndex(Switchboard.java:2697)
        at net.yacy.search.Switchboard.storeDocumentIndex(Switchboard.java:2640)
        at sun.reflect.GeneratedMethodAccessor27.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:616)
        at net.yacy.kelondro.workflow.InstantBlockingThread.job(InstantBlockingThread.java:96)
        at net.yacy.kelondro.workflow.AbstractBlockingThread.run(AbstractBlockingThread.java:78)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
        at java.util.concurrent.FutureTask.run(FutureTask.java:166)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:679)
Caused by: com.spatial4j.core.exception.InvalidShapeException: Invalid latitude: latitudes are range -90 to 90: provided lat: [153.0205]
        at com.spatial4j.core.io.ParseUtils.parseLatitudeLongitude(ParseUtils.java:139)
        at org.apache.solr.schema.LatLonType.createFields(LatLonType.java:68)
        ... 35 more


The URL is not identical, but it is from the same domain around the same time. The error occurs over and over again in the logs, always the same latitude error with the value 153.0205.
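My guess (and it is only a guess) is that 153.0205 is really a longitude - abc.net.au is Australian, and the east coast sits near longitude 153 - so the page's geo metadata, or the parsing of it, may have latitude and longitude swapped. If so, a guard like this sketch (hypothetical names, not YaCy's actual code) could drop the bad coordinates instead of letting Solr reject the whole document:

Code: Select all
// Hypothetical validation sketch: only attach the geo field when the
// coordinates are plausible, so one bad page cannot pause the crawler.
static boolean isValidLatLon(double lat, double lon) {
    return lat >= -90.0 && lat <= 90.0 && lon >= -180.0 && lon <= 180.0;
}

// at indexing time (the Solr field name here is an assumption):
if (isValidLatLon(lat, lon)) {
    doc.setField("coordinate_p", lat + "," + lon);
}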

Other commonly recurring errors have to do with embedded "referer" URLs, which seem to break the crawler:
Code: Select all
W 2013/03/29 20:14:15 StackTrace null
java.lang.IllegalArgumentException
        at java.net.URI.create(URI.java:859)
        at org.apache.http.client.methods.HttpGet.<init>(HttpGet.java:69)
        at net.yacy.cora.protocol.http.HTTPClient.GETbytes(HTTPClient.java:344)
        at net.yacy.crawler.retrieval.HTTPLoader.load(HTTPLoader.java:136)
        at net.yacy.crawler.retrieval.HTTPLoader.load(HTTPLoader.java:182)
        at net.yacy.crawler.retrieval.HTTPLoader.load(HTTPLoader.java:182)
        at net.yacy.crawler.retrieval.HTTPLoader.load(HTTPLoader.java:182)
        at net.yacy.crawler.retrieval.HTTPLoader.load(HTTPLoader.java:76)
        at net.yacy.repository.LoaderDispatcher.loadInternal(LoaderDispatcher.java:279)
        at net.yacy.repository.LoaderDispatcher.load(LoaderDispatcher.java:162)
        at net.yacy.repository.LoaderDispatcher.load(LoaderDispatcher.java:148)
        at net.yacy.crawler.data.CrawlQueues$Loader.run(CrawlQueues.java:660)
Caused by: java.net.URISyntaxException: Illegal character in query at index 297: https://twitter.com/intent/session?original_referer=http://pesn.com/2012/08/31/9602173_Keshe_Foundation_Pro$
        at java.net.URI$Parser.fail(URI.java:2825)
        at java.net.URI$Parser.checkChars(URI.java:2998)
        at java.net.URI$Parser.parseHierarchical(URI.java:3088)
        at java.net.URI$Parser.parse(URI.java:3030)
        at java.net.URI.<init>(URI.java:595)
        at java.net.URI.create(URI.java:857)
        ... 11 more
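
The trace suggests the crawler hands the raw link text straight to URI.create(), which throws on any illegal character - here an un-encoded URL embedded in the query string. As a defensive sketch (my assumption of a fix, not YaCy's actual code), the multi-argument URI constructor percent-encodes each component instead of rejecting it:

Code: Select all
import java.net.URI;
import java.net.URL;

public final class UrlEscape {
    // URI.create(spec) throws on illegal characters; building the URI
    // from parsed components quotes them instead.
    public static URI toEscapedUri(String spec) throws Exception {
        URL url = new URL(spec); // lenient parse into components
        return new URI(url.getProtocol(), url.getUserInfo(), url.getHost(),
                       url.getPort(), url.getPath(), url.getQuery(), url.getRef());
    }
}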


Another, earlier pause was caused here:
Code: Select all
W 2013/03/29 20:15:23 SOLR failed to send http://www.abc.net.au/adelaide/ to solrorg.apache.solr.common.SolrException: com.spatial4j.core.exception.InvalidShapeException: Invalid latitude:$
W 2013/03/29 20:15:24 SOLR failed to send http://www.abc.net.au/adelaide/ to solr, pausing Crawler!
W 2013/03/29 20:15:24 SWITCHBOARD Crawl job '50_localcrawl' is paused: failed to send http://www.abc.net.au/adelaide/ to solr
W 2013/03/29 20:15:24 SWITCHBOARD Crawl job '62_remotetriggeredcrawl' is paused: failed to send http://www.abc.net.au/adelaide/ to solr


This happened after the same repetitive "Invalid latitude" errors shown above - eventually it pauses the queue, although I do not understand why it would care about a latitude value enough to stop the crawler.

The only other repetitive errors I get are minor, I believe, and of no concern:
Code: Select all
W 2013/03/29 23:52:20 YACY yacyClient.queryRemoteCrawlURLs error asking peer 'proteo':java.io.IOException: Client can't execute: Read timed out
W 2013/03/29 23:52:21 YACY yacyClient.crawlReceipt error:Client can't execute: Timeout waiting for connection from pool
.
.
.
W 2013/03/29 23:52:28 YACY Received 1/1 double URLs from peer QeBVdlzNGU-q:_anonufe-28333482-243/1.00008136


I assume the top two are just hitting their configured timeout values, no problem, and for the bottom one the URL must already be in the local index. I am glad to have more information for you regarding the queue pausing, and even a little URL-parsing error that might be fixable!

*Edit note: Please let me know if any of these should be reported as actual bugs - I do not yet know enough about the intended behavior of YaCy to figure out what is a bug and what is supposed to happen!
killswitch
 
Posts: 10
Registered: Wed Mar 27, 2013 11:56 pm
Location: Ohio, USA

Re: New USA User, many questions: interface/memory/crash...

Post by killswitch » Fri Apr 05, 2013 11:39 pm

The default* YaCy 32-bit install test is complete. Approximately 7 days after starting the node, the memory was "full", as the graph below shows. Performing a single search for "freedom" crashed the web interface within a minute or two, and it was no longer responsive. The java process has kept running since I performed these tests yesterday, but the web interface is inaccessible. I'm not sure whether the process is doing anything useful; the logs are just filling up with:

E 2013/04/05 18:22:35 BUSYTHREAD Runtime Error in serverInstantThread.job, thread 'net.yacy.search.Switchboard.cleanupJob': null; target exception: null
.
.
W 2013/04/05 18:27:36 StackTrace null
java.lang.reflect.InvocationTargetException

These messages have been appearing repeatedly in the log file since yesterday.

[Image]

[Image]

Since this is the behavior of the default install, what can be done to prevent the memory from filling up while still being able to search in freeworld? If I turn off DHT-In, I can no longer search freeworld (remote search) - is that correct? Is there any way to keep YaCy from outgrowing a small memory footprint? How do you keep it running continuously on a Raspberry Pi, for instance? My "low memory" 32-bit test only worked for a week, with only passive use (no local searching or proxy)..?
killswitch
 
Posts: 10
Registered: Wed Mar 27, 2013 11:56 pm
Location: Ohio, USA

Re: New USA User, many questions: interface/memory/crash...

Post by killswitch » Fri Apr 05, 2013 11:46 pm

Also, I wanted to report how to work around the crawler pause! So far, the only fix I know is to blacklist the domain(s) that cause trouble for the crawler.

I take the domain (host) from the variable (62_remotetriggeredcrawl_isPaused_cause) on Advanced Properties, and then enter it as a URL filter on the Filter & Blacklists page.

So far I have blacklisted:
*.abc.net.au/*
*.chem.cmu.edu/*
*.washingtonpost.com/*

This does a good job of preventing the crawler pausing. Unfortunately, it means I do not index those sites at all!
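If I read the Filter & Blacklists page correctly, the pattern format is host-part/path-part, so a narrower entry might block only the offending section instead of the whole domain. An untested guess on my part:

Code: Select all
www.abc.net.au/adelaide/.*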
killswitch
 
Posts: 10
Registered: Wed Mar 27, 2013 11:56 pm
Location: Ohio, USA

