How it Works

(Under Construction)

Schema for MyRatingSearch (MRS)

The problem with Googlebot-based search engines (SEs) that rank Web pages by keyword density, relevance, and back links is the following:

1. Back links are thought to be a vote or recommendation. But how does one know whether a vote is actually a condemnation, or a recommendation at some different “value level”?
2. In the business world, the customer is always right. In the world of free indexed organic content, the surfer is always right. Surfers’ ability to rate pages, and their reasons for clicking on back links, mean nothing unless those clicks are tracked so the ratings of connected pages can be analyzed and understood in a much more meaningful Eigenvector matrix/pyramid.
3. Black hat SEO robots that hijack Web pages by placing unwanted back links on them have destroyed the value of Googlebot results. reCAPTCHA filters can be overcome with optical character recognition (OCR) breakers and human-input captcha libraries. Google’s Panda update encouraged “rel=nofollow” hyperlink modifiers, giving little-to-no back link credit, but that only encourages more black hat competition for “rel=dofollow” Web pages, as it is too hard for Panda to police what is now 70 million websites as of 2017 and significantly increasing. White hat SEO is also discouraged: participation drops when human participants know they may not get back link credit 50%–95% of the time and don’t have a large budget to buy press releases in bulk. I believe Google and its competitors, licensing Googlebot in a quasi-monopoly, benefit from black hat content, as it forces surfers to click on more expensive sponsor ads in search of what is believed to be better content.
4. Keyword density scores (frequency, prominence, proximity, document size, comments, meta-tags, keywords in the URL string, etc.) force the writer to conform to a complicated formula that can still be reverse-engineered with SERP/Web-position analyzers, rather than rewarding the writer for creative independent thought. Ranking should be based on supply versus demand, which Google cannot possibly track well without unbreakable frames.
5. Comments or textual input that reflect the surfer’s attitude toward a page, and ratings thereof, much like a cascading crescendo of threads in forums and social media, with earned credits, should encourage 7 billion people to rate and comment on what is now about 1 billion pages, as there is hope for humanity to survive what will be a man-machine mutual-symbiosis duality. Google, Bing, Yahoo, etc. – the “Googlebot” search engines – rate only a small percentage of these pages, and with little sophistication.
6. There is no incentive to share a page with those you call friends in a social media pool, which we will implement here by integrating a search engine with social media.

7. True keyword relevance between pages is not properly measured. While uniqueness and punishing plagiarism are important, they are two different scores that should not have a linear relationship; they are n factors with n equations and n unknowns, allowing adjustment of Eigenvalue weighted averages over time to maximize total revenue when looking at supply-versus-demand curves. Synonym analysis, when looking at “dog”, for example, means “canine” gets 100%; if 50% of dogs are mutts, then “mutts” gets 50%, “German Shepherds” about 5%, and so on.

MyRatingSearch (MRS) addresses all seven disadvantages, and is expected to make Googlebot-based SEs antiquated. Although other SEs are necessary in the absence of a spider bot to index pages independently, MRS is thus an SE “merger”: a merger of three SEs with the native MRS.

MRS is an SE that allows signed-up, logged-in users, or “raters”, to earn credits on cost-per-action (CPA) sponsor advertising by accumulating points. The SE will be built in frames. A long, short frame on top holds the address bar, where searchers enter keywords and then click one of three search engines – MRS, Bing, or Yahoo. The middle frame shows the search results and every page the searcher clicks through to. At no time will the searcher be able to break out of the frames while in the MRS domain.

The bottom frame, short and wide, is where the rater can enter a radio value of 0, 1, 2, … 10 (0 being worst, 10 being best) and an optional 0–250 character comment in a text area field, and submit, with a 24-hour block of the IP address for that rated URL before they can rate it again. A frame on the right-hand side, tall and thin with a vertical scroll bar, holds the comments raters have made about the page. Below each comment is a rating bar – similar to the bar for rating the overall page, only smaller – where raters can rate the comment, then reply and rate indefinitely for a cascading crescendo of threads. More participation by more surfers commenting and rating means a larger population promoting good content and downgrading bad content, giving webmasters incentive to improve content without the “guessing game” of Googlebot formula crackers.
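The 24-hour per-IP, per-URL rating block described above could be sketched as follows. This is a minimal in-memory sketch with hypothetical function names; a real deployment would check the time stamps already stored in the ratings table.

```python
import time

# Hypothetical in-memory store of (ip, url) -> last rating timestamp.
_last_rated = {}

BLOCK_SECONDS = 24 * 60 * 60  # 24-hour block per IP address per rated URL


def may_rate(ip, url, now=None):
    """Return True if this IP is allowed to rate this URL again."""
    now = time.time() if now is None else now
    last = _last_rated.get((ip, url))
    return last is None or (now - last) >= BLOCK_SECONDS


def record_rating(ip, url, now=None):
    """Record that this IP just rated this URL, starting the 24-hour block."""
    _last_rated[(ip, url)] = time.time() if now is None else now
```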

The code for the rating bar is written in XML/CSS with Ajax “refreshing”, possibly every few milliseconds timed with random numbers, with human-input values drawn from a library of changing XML/CSS code to thwart robots and regex pattern matching. Random numbers and characters can be added to code that the main .css template file understands to be within a range of acceptable values for re-defining radio buttons. OCR robots will also be thwarted by excluding submissions whose usernames and URLs are not in the database, and all ratings/comments will be removed when the ratings do not fall within the standard deviation of the anticipated Gaussian distribution of the data. Still, the best way to thwart robots is not with laws or technology, but by keeping your advertising free – commission-based, if the website has a high enough pre-Woodrank, discussed at

See Sign-up/log-in Free as a Rater – Free Ad Credits! to learn the point system and how to get discounts and win raffles for rating and commenting on pages and engaging in social media.

How pages are rated, data stored and analyzed, and ranked

For every URL or page grabbed from Google, Bing, or Yahoo, there is the inference that rated pages carry results in high demand, probably already ranking high because of back links and good keyword density, in addition to being clicked on because of good meta-titles and SE descriptions. So this is not a violation of Google’s patent, only a human search-and-grab mechanism where three rival SEs still have searchers at MRS use them, and occasionally click on their sponsor ads. There is no need for a spider or indexing bot, except to use optical character recognition (OCR) to identify the ratings shown on other pages (e.g. 2.5 of five stars).

Tables must be constructed recording, for each rating: the URL being rated; the original SE the rater came from; IP address; time stamp; keywords looked up; rating; comment; referral URL; username of the rater; and the ID of the URL’s visible content and meta-title since the previous indexing (pointing to a different URL table). Likewise, a table must exist for the ratings of all the comments. Another table must exist for the scores governing how high the URLs rank in a lineage or pyramid, similar to how Google ranks pages with an Eigenvector formula. Only instead of a scalar exclusively for keyword density, the scalar will be a weighted average of the ratings, the number of votes, the number of comments, and the average ratings of the comments – in other words, a vertical stack of values for an Eigenscalar.

The vectors will be one-to-“n” rows of lineages, where by tracing referral URLs (except when the referrer is the search engine page) one can determine the number of actual links clicked on and add the Raterank (the overall score of a URL, computed going from oldest to newest; time stamps matter because there must be a “most recently visited” criterion for finding pages). Each new row represents a new next-tier back link, and the rows can grow from one to “n” columns, an indefinite width for the 2-dimensional matrix.
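As a sketch, the tables above might look like the following in SQLite. All table and column names here are assumptions drawn from the fields listed, not a finished schema.

```python
import sqlite3

# In-memory database for illustration; a real deployment would persist this.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ratings (
    id            INTEGER PRIMARY KEY,
    url           TEXT NOT NULL,   -- the URL being given the rating
    source_se     TEXT,            -- original SE the rater came from
    ip_address    TEXT,
    rated_at      TIMESTAMP,       -- time stamp ("most recently visited")
    keywords      TEXT,            -- keywords looked up
    rating        INTEGER CHECK (rating BETWEEN 0 AND 10),
    comment       TEXT,            -- optional 0-250 character comment
    referral_url  TEXT,            -- for tracing back link lineages
    username      TEXT,
    snapshot_id   INTEGER          -- ID of content/meta-title since previous indexing
);
CREATE TABLE comment_ratings (
    id         INTEGER PRIMARY KEY,
    rating_id  INTEGER REFERENCES ratings(id),  -- the comment being rated
    rating     INTEGER CHECK (rating BETWEEN 0 AND 10),
    username   TEXT,
    rated_at   TIMESTAMP
);
CREATE TABLE raterank (
    url       TEXT PRIMARY KEY,
    raterank  REAL               -- 0 to 10, four significant figures
);
""")
```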

For example, let’s say URL A has a rating average of 6.5, with 5 votes, 5 comments, and an average comment rating of 4.5. Let’s say A has 4 rater back link lineages traced through referral URLs (which always carry older time stamps; lineages only occur when a rater clicks on a link and both the starting page and the landing page are rated). The value from high tier to low tier, going left to right in the 2-D matrix, carries a coefficient of exponential decay, as lower-tier rate-votes are worth less to the overall score than higher tiers. It might look something like this:

[Figure: MyRatingSearch Eigenvector matrix.] The “W”s represent weighted averages for the importance of each rating-comment-rating value of the page in question, if a linear phenomenon. The “w”s represent the compounded ratings for the lineage pages – pages that point to the page in question, “upgrading” its score, or Raterank, because it sits at the top of a lineage, in a pyramid or array accessed from oldest to newest, pointing upward as the rater rates pages, clicks on links, and rates a succession of pages in cascading fashion. When a page gets comments and ratings from other surfers, the pyramid grows in size, much as when some other webmaster decides to back link to a page on a different domain – only here the ratings, comments, and surfer clicking activity are valuable as well.

The matrices, multiplied out, give you the Raterank for the top page in question. MRS reserves the right to change the coefficients that apply to the four scalar values, as it depends on how the four values relate to each other. So if one URL has a rating of 6.5 and another also has 6.5, but the first has 5 votes and the other 10, the one with 10 votes will benefit more than the one with 5; conversely, an average of 3.5 will be penalized more as votes accumulate, as should happen whenever the initial average rating is below 5. For now, a single compounded rating with one varying coefficient will be used, discussed below.

Exponential decay factor

[Figure: partial sums of the exponential series.] As n (or “x”) goes to infinity, y does not; it only tapers off toward a horizontal line at y = e = 2.71… (an irrational number).

Because the number of votes can be an indefinite number, or “n”, there may be a need to identify the page with the highest number of votes in an indexing period. The Googlebot Eigenvector used for orthogonal matrix calculations is now suspected to be linear, with no depreciation of the values of lower-tier back links; hence there is severe “back link Darwinism” where grandfathered pages are hard to compete against, and without a correction to help the many lower-Raterank pages, the limited 4-significant-figure precision of Raterank means too many values sit too close to each other, and many will overlap.

The vote total can be divided by the highest vote total, so most “vote percentages” will be much less than one. Assuming exponential decay is the best way to account for increasing numbers of votes, consider the partial sums of the series ∑ 1/n! for n = 0 to N (1 + 1 + 1/2! + 1/3! + …): they do not go to infinity as N does, but taper off toward e = 2.71…, the base of the natural logarithm. If the rating-initial-average is above 5, one multiplies it by the partial sum ∑ 1/n! divided by e (the limit as n goes to infinity), to produce “vote-number-compounded ratings” no higher than 10, equivalent to 100% (it is best to work with four significant figures for now to account for all decimal numbers). If the average rating is below 5 – or in the two of the four comment-rating conditionals, discussed below, where the page must be deprecated – the rate-vote-count-adjusted score is obtained by dividing e (2.71…) by the partial sum ∑ 1/n! and multiplying that result by the rating-initial-average: the reciprocal of what otherwise happens, deprecating a below-average score as the vote count grows and improving it when the vote count is low.
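A minimal sketch of the vote-count adjustment, assuming the partial sums run from k = 0 so that they converge to e. The cap at 10 on the below-average branch is an added assumption to keep results on the 0–10 scale; the function names are hypothetical.

```python
import math


def vote_factor(n_votes):
    """Partial sum of 1/k! for k = 0..n_votes, divided by e, so the
    factor rises from 1/e toward 1.0 as the vote count grows."""
    partial = sum(1.0 / math.factorial(k) for k in range(n_votes + 1))
    return partial / math.e


def vote_adjusted_rating(avg_rating, n_votes):
    """Above-average ratings (>5) earn fuller credit as votes accumulate;
    below-average ratings get the reciprocal factor, so their benefit of
    the doubt shrinks as votes accumulate."""
    f = vote_factor(n_votes)
    if avg_rating > 5:
        return avg_rating * f
    return min(10.0, avg_rating / f)  # cap is an assumption, not in the text
```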

Compounded rating and final Raterank Eigenvector calculation

Exponential corrections can also apply to the number of comments, or “comment vote numbers”, to produce a new compounded rating for the page. The same argument holds for an above-average rating on the comments, which in most cases is expected to be above 5 as an “endorsing endorsement”; but if a below-5-average page gets what should be negative comments, the raters will in all likelihood endorse those condemnations with above-average (>5) scores, and the rated page should receive more of a penalty. For rating average > 5 with comment rating average < 5, there should be a penalty; for rating average < 5 with comment rating average < 5, there should be an improved compounded score. Penalties and improvements arising from the relationship between two numbers require if-then conditionals based on scores being higher or lower than 5.

If 0 to 10 (before the Raterank calculation) can be thought of as a percentage (e.g. 5 = 50% or 0.5), and an initial average rating of 7 is penalized by a comment rating of 3, one might think to average the two out to 5 for a compounded new score. But some would say comment ratings don’t mean as much, so the 3 (or any comment rating of 0 to 5) should be multiplied by a coefficient, making the result a new number between 3 and 7. We will compromise for now and say 6, the midpoint between 5 and 7. That means:

Rating (R-C, or “compounded”, adjusted for comment ratings)

= [([R-initial-average + R-average-on-comments] / 2) + R-initial-average] / a,

where a = 2 for now, but this may change over time: there will be an experiment to see whether different “a” values produce different amounts of traffic, ratings, comments, comment ratings, revenue raised, etc., for MRS versus Google, Bing, and Yahoo, as measured by frame activity at MRS. So averaging out averages can remain a linear phenomenon for now – not something where exponential decay should be applied, as we are dealing with 0–10 numbers that are really 4-significant-figure numbers, or percentages. The result, before multiplying by an array of lineages to produce the Raterank (based on trying to sit high up a many-ratings, multi-tier pyramid), is a rating average compounded by the number of votes, the number of comments, and the comment average – or R-C. Hence, the matrix calculation can be re-written as:
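The R-C formula can be checked against the worked example above (an initial average of 7 penalized by a comment rating of 3 should compound to 6):

```python
def compounded_rating(r_initial_avg, r_comment_avg, a=2.0):
    """R-C = [((R-initial-average + R-average-on-comments) / 2)
    + R-initial-average] / a, with a = 2 for now (the coefficient
    'a' may change as experiments dictate)."""
    return (((r_initial_avg + r_comment_avg) / 2.0) + r_initial_avg) / a
```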

[Figure: MyRatingSearch compounded Eigenvector matrix.] Final Raterank – easier to compute with the single value R-C, the compounded rating based on the relationship between the four factors: Raterank = R-C × (1·w11 + ½·w12 + 1·w21 + … + cij·wij), where cij is the coefficient or multiplier for the final row i and final column j. This is truly an Eigenvector, but modified with ∑ 1/n!, which is not a matrix but an additional, “dynamic” scalar. Because MRS will have the potential to create its own database with indefinite threading of ratings, comments on ratings, and so on down the same pyramid that defines the Googlebot matrix, pyramid building intertwines social media with search engines, giving more hope to new webmasters who historically cannot compete against grandfathered or big-budget websites. While an initial score is based on keyword density, votes, and cascading ratings and comments, supply and demand measured by time spent on a Web page may be another criterion worked into the Raterank score.
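Under the assumption that each successive back link tier is deprecated by a fixed decay factor (0.5 here is an assumed value; the text only requires lower tiers to count for less), the lineage sum might be sketched as:

```python
def raterank(r_c, lineages, decay=0.5):
    """Multiply R-C by the summed lineage weights, deprecating each
    successive tier (column) by powers of an assumed decay factor."""
    total = 0.0
    for row in lineages:                  # one row per back link lineage
        for tier, w in enumerate(row):    # columns are successive tiers
            total += (decay ** tier) * w
    return r_c * total
```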

Here R-C can be subject to changes through the “a” coefficient, which will be hard to reverse-engineer with Raterank formula crackers, interpolators, etc. – similar to how Web Position Gold tries to crack the PageRank formula. The final Raterank is divided by the highest Raterank of all the URLs and multiplied by 10, so a score of 0 to 10 is possible, in four significant figures. Indexing of all pages for new Rateranks, which works from the oldest ranks to the most recent ones, must occur daily with a cron job at 12 midnight Eastern Standard Time.
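The nightly normalization step (divide each raw Raterank by the highest, multiply by 10, keep four significant figures) might look like:

```python
def normalize_rateranks(raw_scores):
    """Scale raw Rateranks so the best URL scores 10.0 and every other
    URL gets a 0-10 value rounded to four significant figures."""
    top = max(raw_scores.values())
    return {url: float(f"{(score / top) * 10:.4g}")
            for url, score in raw_scores.items()}
```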

How the SE (MRS) looks up, sorts, and dumps listings

When a surfer uses the MRS SE – the leftmost of the three SEs – to look up a keyword expression, MRS will break the expression into its individual words, then search all the URLs in the URL table of the database for a literal match in:
1. the meta-title; sort and dump based on Raterank ranking
2. the keywords raters previously looked the page up under (stored in the URL table every time a rating is given); sort and dump based on Raterank ranking
3. the comments; sort and dump based on Raterank ranking
4. the page’s actual displayed content; sort and dump based on Raterank ranking

There will be 10 results per page: the meta-title as the hyperlink, and the first 5 complete sentences containing any of the words in the keyword expression as the description. The search then repeats for “less literal” matches, and the dump is finished. When you break the keyword expression into an array of one or more individual words – say, “Los Angeles Web design” – the four-pronged search must first match the conditional “Los” && “Angeles” && “Web” && “design” (&& means AND, || means OR). Then, once the expression has been matched in proper order across the list of four options, Raterank-sorted, and dumped, the conditional becomes:

“Los” && “Angeles” && “Web” || “design”,
“Los” && “Angeles” || “Web” || “design”,
“Los” || “Angeles” || “Web” && “design”,
“Los” || “Angeles” && “Web” && “design”,
“Los” || “Angeles” || “Web” || “design”,

and the search is complete. With two operators (&& and ||) available for each of the three connector positions, there are 2³ = 8 possible combinations, of which the six above (the all-&& case plus the five listed) are used; this defines the nested “for-loop” that instantiates all the string matches to look for across meta-titles, keyword-search history, comments, and actual content. See Advertise for Free for details on high-volume advertising – 10,000+ keywords with a keyword planner – producing hundreds of thousands of inexpensive, guaranteed no-negative-ROI placements at the top, middle, and bottom of each page.
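The connector enumeration above could be generated with a nested loop, sketched here via itertools.product; note that the full space for three connector slots is 2³ = 8 combinations, from which the six used can be selected.

```python
from itertools import product


def connector_combinations(words):
    """Enumerate every way of joining the words with && or ||.
    A four-word expression such as 'Los Angeles Web design' has
    three connector slots, giving 2**3 = 8 combinations."""
    slots = len(words) - 1
    combos = []
    for ops in product(("&&", "||"), repeat=slots):
        expr = words[0]
        for op, word in zip(ops, words[1:]):
            expr += f" {op} {word}"
        combos.append(expr)
    return combos
```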

While human activity – rating, having ratings commented on, and having comments rated – is encouraged, MRS reserves the right to employ spiders that use optical character recognition (OCR) to spider all registered Web pages for 0, 1/2, … 5-star, “thumbs up”, or other forms of human ratings; to keep a separate keyword density metric for comments on ratings and the ratings thereof, descending from the main page to all linked-to same-domain pages; to create a unique keyword density metric indexed at the same time; to record hyperlink/anchor-text data; and to build a database and a unique organic search engine ranking system. Being unique enough compared to Googlebot to qualify for a patent, and requiring no money up front for performance-only paid advertising, makes our philosophy profitable enough to satisfy all corporate members and earn high recommendations from MLM-encouraged customers.

Delegating indexing technology to clients around the world – JavaScript in browsers, possibly free downloadable desktop software – can substantially defray the cost of a computer network, competing against the large budgets of Google and Facebook, who spend too much on networks concentrated in small areas with more risk from security flaws, natural disasters, and the many local infrastructure issues they have with power outages and expensive labor interactions. A worldwide free network of PCs, Macs, etc. can “absorb the shock” of any small localized miscues, and the robots can quickly spider elsewhere. Instead of organic indexing requiring a 6-week delay, we will try to make it 24 hours.

See Disclaimer of Patent Rights for an explanation of the “loss of intellectual property theft” patent strategy: avoiding a patentable concept being stolen and used against James Dante Wood, the inventor of MRS, who is not seeking licensing because of leverage with overseas trust funds and the lack of enforcement power under international laws.