Well, for all my keywords it's the same.
Only one of them is going down.
I will investigate for a few days to find out why, as a lot of stuff was used for that keyword.
Been there, done that...
On Matt Cutts' blog there is one person pointing out that a few spammy sites are still ranking on top...
Matt Cutts says:
Seppo, we have some things coming later this summer that should help with the type of sites you mention, so I think you made the right choice to work on building authority.
Source: Penguin 2.0 rolled out today
---------- Post added 05-23-2013 at 04:08 PM ----------
Tim Grice says something very interesting... I think you people should know this:
Many sites were expecting to see link removal efforts when Penguin reran, however it is becoming clear that Penguin is not a penalty. There will be no magic recovery because you have removed links, this is an algorithm update, it isn’t Google saying ‘you’ve been naughty, we’re going to punish you’, this is Google saying ‘all that stuff you did to rank, we’ve just killed it, start again’. So unless you have managed to replace all the authority your low quality links were giving you, don’t expect to see your rankings come back.
Source: Early thoughts on Penguin 2.0 | Branded3
Yes, well said by Tim.
People, just be smart in doing your SEO and you won't need to worry about any new updates.
Free Tool:
IMT Website Submitter (Indexer)
OMG, the SERP positions of some of my sites are down. I hope this is just a Google dance.
But one website's SERP position went up to page 1.
Matt Cutts has tweeted that website operators should also take part in evaluating the second Penguin update and can give feedback via a "Penguin Spam Report"!
In case you missed it, we're taking Penguin spam reports at http://bit.ly/penguinspamreport… and digging into the feedback we get.
Here's a special spam report form: http://bit.ly/penguinspamreport… Please tell us about the spammy sites that Penguin missed.
Source: https://twitter.com/mattcutts/
You can make your Penguin spam report here: https://docs.google.com/forms/d/1rhR...viewform?pli=1
If you see a spam site that is still ranking after the latest Penguin webspam algorithm, please tell us more about it.
Have you already noticed any changes for webpages that were pushed with IMT Supercharged bookmarks? Is it still recommended to use them?
I found some interesting stuff...
Somebody is spamming Matt Cutts in a funny way...
In the link below you can find a forum conversation between this guy and Matt Cutts about the penalty.
Link: https://news.ycombinator.com/item?id=5426209
---------- Post added 05-24-2013 at 08:03 AM ----------
Dear Strlunga,
You can still use the IMT Tools and everything is fine!
And yes... I recommend you to use these IMT Tools...
EVERYTHING IS SAFE!
Here is the proof, please read the post in this thread: http://www.imtalk.org/f42/1589-imt-s...-sales-48.html
Thank you seobunny! I am currently trying the IMT Bookmarks for the first time, and I was afraid that this was the wrong time to use them... I'm curious about the effect.
LoL... My blog became PageRank 3... I don't know when, but it's new to me.
Four years ago my domain got a PageRank penalty because of spam techniques, but it seems fine now after all these updates... after four years, lol.
I found a case study about the loser domains from the Penguin 2.0 update.
It's very interesting and a very long case study.
YOU PEOPLE HAVE TO SPEND TIME AND READ IT ALL!!!
Link: Deep Dive into a Penguin 2.0 Victim - Penalty Analysis and lot's of spammy links - LRT Link Research Tools
My site was somewhat penalized (around 30%-40% traffic loss) by the first Penguin update, but it looks like after an update that happened at the beginning of May and then Penguin 2.0, my site was affected in a good way. It didn't get back all the keywords it lost pre Penguin 1.0, but after Penguin 2.0 it got a new keyword, and some of the old keywords it lost are now either close to the first page or close to the top 100. Looks like it isn't penalized for those specific keywords anymore.
I haven't done link building in over a year. I did some link building post Penguin 1.0 just to try and make my site's link profile look less artificial by using the URL, branded keyword and generic keyword anchor text, but not much happened after that. Other than that I just worked on improving the actual site and got a bunch of Facebook likes/Google pluses/AddThis shares naturally.
Fortunately I didn't have to look for my links and request that they get deleted, like I've heard a lot of other people did. The majority of the links I manually built pre Penguin 1.0 were naturally deleted, since they were either part of a network that got deindexed or part of a .edu blog or forum that found out about them quickly post Penguin 1.0 and deleted them itself.
Another thing I noticed with my site after Penguin 1.0: when I googled the description of my site, it would show up on the second page. A few scrapers and web statistics sites were above it; other sites had more authority over my own description than I did, even though it originally came from my site. But finally, after Penguin 2.0, my site is number 1 when I google the description, like it should be. That gives me the impression that even if the link profile of my site, or anyone else's penalized site, had been fixed and in good if not perfect condition for a long time, it wouldn't have mattered: you have to wait for a big update like Penguin 2.0 before Google acknowledges it.
Saw some of my sites hit by Penguin 2.0, not drastically, but I did notice that there are fewer occurrences of the same site in the top 100. Previously I had a few pages from one site listed in the top 100; now only the highest-ranked page seems to be on the list. Also, the bookmarks I did here were actually listed, but way back on page 10. Maybe they are still doing the dance.
There are tons of bots scraping and stealing your content!!! I really understand your problem, but there is an easy way to deal with this!
You have to disallow a lot of bots in your robots.txt file.
Add these lines to your robots.txt file:
Code:
User-agent: Alexibot
User-agent: Aqua_Products
User-agent: asterias
User-agent: b2w/0.1
User-agent: BackDoorBot/1.0
User-agent: BlowFish/1.0
User-agent: Bookmark search tool
User-agent: BotALot
User-agent: BotRightHere
User-agent: BuiltBotTough
User-agent: Bullseye/1.0
User-agent: BunnySlippers
User-agent: CheeseBot
User-agent: CherryPicker
User-agent: CherryPickerElite/1.0
User-agent: CherryPickerSE/1.0
User-agent: Copernic
User-agent: CopyRightCheck
User-agent: cosmos
User-agent: Crescent Internet ToolPak HTTP OLE Control v.1.0
User-agent: Crescent
User-agent: DittoSpyder
User-agent: EmailCollector
User-agent: EmailSiphon
User-agent: EmailWolf
User-agent: EroCrawler
User-agent: ExtractorPro
User-agent: FairAd Client
User-agent: Flaming AttackBot
User-agent: Foobot
User-agent: Gaisbot
User-agent: GetRight/4.2
User-agent: Harvest/1.5
User-agent: hloader
User-agent: httplib
User-agent: HTTrack 3.0
User-agent: humanlinks
User-agent: InfoNaviRobot
User-agent: Iron33/1.0.2
User-agent: JennyBot
User-agent: Kenjin Spider
User-agent: Keyword Density/0.9
User-agent: larbin
User-agent: LexiBot
User-agent: libWeb/clsHTTP
User-agent: LinkextractorPro
User-agent: LinkScan/8.1a Unix
User-agent: LinkWalker
User-agent: LNSpiderguy
User-agent: lwp-trivial/1.34
User-agent: lwp-trivial
User-agent: Mata Hari
User-agent: Microsoft URL Control - 5.01.4511
User-agent: Microsoft URL Control - 6.00.8169
User-agent: Microsoft URL Control
User-agent: MIIxpc/4.2
User-agent: MIIxpc
User-agent: Mister PiX
User-agent: moget/2.1
User-agent: moget
User-agent: Mozilla/4.0 (compatible; BullsEye; Windows 95)
User-agent: MSIECrawler
User-agent: NetAnts
User-agent: NICErsPRO
User-agent: Offline Explorer
User-agent: Openbot
User-agent: Openfind data gatherer
User-agent: Openfind
User-agent: Oracle Ultra Search
User-agent: PerMan
User-agent: ProPowerBot/2.14
User-agent: ProWebWalker
User-agent: psbot
User-agent: Python-urllib
User-agent: QueryN Metasearch
User-agent: Radiation Retriever 1.1
User-agent: RepoMonkey Bait & Tackle/v1.01
User-agent: RepoMonkey
User-agent: RMA
User-agent: searchpreview
User-agent: SiteSnagger
User-agent: SpankBot
User-agent: spanner
User-agent: suzuran
User-agent: Szukacz/1.4
User-agent: Teleport
User-agent: TeleportPro
User-agent: Telesoft
User-agent: The Intraformant
User-agent: TheNomad
User-agent: TightTwatBot
User-agent: toCrawl/UrlDispatcher
User-agent: True_Robot/1.0
User-agent: True_Robot
User-agent: turingos
User-agent: TurnitinBot/1.5
User-agent: TurnitinBot
User-agent: URL Control
User-agent: URL_Spider_Pro
User-agent: URLy Warning
User-agent: VCI WebViewer VCI WebViewer Win32
User-agent: VCI
User-agent: Web Image Collector
User-agent: WebAuto
User-agent: WebBandit/3.50
User-agent: WebBandit
User-agent: WebCapture 2.0
User-agent: WebCopier v.2.2
User-agent: WebCopier v3.2a
User-agent: WebCopier
User-agent: WebEnhancer
User-agent: WebSauger
User-agent: Website Quester
User-agent: Webster Pro
User-agent: WebStripper
User-agent: WebZip/4.0
User-agent: WebZIP/4.21
User-agent: WebZIP/5.0
User-agent: WebZip
User-agent: Wget/1.5.3
User-agent: Wget/1.6
User-agent: Wget
User-agent: wget
User-agent: WWW-Collector-E
User-agent: Xenu's Link Sleuth 1.1c
User-agent: Xenu's
User-agent: Zeus 32297 Webster Pro V2.9 Win32
User-agent: Zeus Link Scout
User-agent: Zeus
Disallow: /
Unfortunately the robots.txt solution won't help much; I already explained this a few years ago:
There are a few ways you can block them, like robots.txt (yeah, right), some custom-made scripts, and the .htaccess file.
The problem with robots.txt is that bad bots don't obey and follow what is in it;
only good bots obey robots.txt, and we don't have problems with good bots like googlebot, bingbot, slurp, ... in the first place.
So robots.txt is just a reference for the spiders, which they "might" follow.
The next logical solution would be the .htaccess file, because it is very easy to implement and configure one.
If you don't have one in the root dir of your server, just create a file named ".htaccess", it is that easy; see the sketch below.
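For example, here is a minimal .htaccess sketch, assuming Apache with mod_setenvif enabled (Apache 2.2 style access control); the user-agent patterns are just a few picked from the robots.txt list above, so extend the list as needed:
Code:
# Flag any request whose User-Agent header matches a known bad bot
SetEnvIfNoCase User-Agent "EmailCollector" bad_bot
SetEnvIfNoCase User-Agent "HTTrack" bad_bot
SetEnvIfNoCase User-Agent "WebStripper" bad_bot
SetEnvIfNoCase User-Agent "WebZip" bad_bot
SetEnvIfNoCase User-Agent "SiteSnagger" bad_bot

# Deny every flagged request
<Limit GET POST HEAD>
Order Allow,Deny
Allow from all
Deny from env=bad_bot
</Limit>
Unlike robots.txt, this is enforced by the server itself, so a bad bot gets a 403 error whether it chooses to obey anything or not.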
http://www.imtalk.org/f19/5191-bad-b...tion-here.html
Free Tool:
IMT Website Submitter (Indexer)