The site that the Google bots are crawling is huge: dozens of pages with hundreds of links. Google appears to be crawling the site at five-minute intervals, and all of this indexing (or whatever it's doing) is consuming resources and costing real money. I'm afraid that if I add a robots.txt file, the site will not rank as high in Google's search results.
Any help or info? I have found plenty of material that tells me "this is what a web crawler is," but nothing that goes beyond that.
User-agent: *
Disallow: /cgi-bin/
Disallow: /privatefolder/
Disallow: /downloads/
Disallow: /important/members.html
is that all I need?
I have never used a robots.txt file.
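On the crawl-rate concern specifically: robots.txt also has a Crawl-delay directive that some crawlers honor, though Googlebot is generally understood not to (Google's crawl rate is adjusted through its webmaster tools instead). A sketch combining it with the blocks above; the delay value here is just an illustration:

```text
# Ask crawlers that honor Crawl-delay to wait between requests
# (Googlebot ignores this directive)
User-agent: *
Crawl-delay: 10
Disallow: /cgi-bin/
Disallow: /privatefolder/
Disallow: /downloads/
Disallow: /important/members.html
```

Note that Disallow only stops compliant bots from fetching those paths; it is not an access control.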
I’m an absolute beginner. I want to keep my site fresh and up to date with articles and images, and I’m assuming WordPress can get the job done. The thing is, my site was developed using XHTML and CSS. How can I integrate WordPress into this site?
My site is hosted on a Windows platform. It supports PHP and MySQL but does not support the Apache mod_rewrite module. I’ve been told by my host that I might need to move to a Linux platform.
Now what do I do? What is this mod_rewrite Apache module, and what does it do? What is my next step? Do I move to a Linux platform as advised, or can I still use WordPress on the Windows platform?
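For context on what mod_rewrite actually does here: WordPress only needs it for "pretty" permalinks (URLs like /2009/05/my-post/ instead of /?p=123), which it achieves by rewriting every non-file request to index.php. This is the standard rule block WordPress writes into .htaccess on Apache:

```apacheconf
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
# Pass requests for real files and directories straight through;
# send everything else to index.php for WordPress to route
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
```

So WordPress itself runs fine without mod_rewrite; you would just be limited to the default query-string permalinks (or an equivalent rewrite mechanism on your Windows server, if your host offers one).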
1. How do search engines figure out what a website is about besides using the <meta> tag?
2. How can you get the top result on a search engine when your basic phrase is entered? (For example, for "Landmark Missionary Baptist Church," right now just about the only way to find our site is to also type in a nearby location.)
3. Are robots and people submitting URLs the only ways that search engines find things?
4. How does your URL affect the search results?
5. How much does it cost to pay search engines to appear as the first result?
6. Is there a simple way to submit your site to a whole bunch of search engines at once?
7. Are there any other ways besides the <meta> tag to add tags to your result?
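Touching on questions 1 and 7: besides <meta> tags, search engines weigh the <title> element, headings, link anchor text, and the page body itself. A sketch of the head-level tags that typically matter; the content values here are invented for illustration:

```html
<head>
  <!-- The title is usually the strongest head-level signal
       and becomes the clickable headline in results -->
  <title>Landmark Missionary Baptist Church - Services and Events</title>

  <!-- Often shown as the snippet under the result -->
  <meta name="description"
        content="Service times, sermons, and community events at Landmark Missionary Baptist Church.">

  <!-- Largely ignored by major engines due to past abuse -->
  <meta name="keywords" content="baptist church, sermons, missionary">
</head>
```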
If you want to add more, feel free! Please let me know.
I don’t know about other server-side technologies, but the ASP.NET compiler provides the option of compiling a complete application into a DLL.
If the content is all compiled into a binary, how will search bots be able to crawl the content and index words, meta tags, and so on?