Providing accurate keyword search and question answering approaches for accessing the data available on the Linked Data Web is of central importance to ensure that it can be used by non-experts. In many cases, these approaches return a large number of results, which must be presented in the right order to be of relevance to the user. Improving access to the Linked Data Web thus demands ranking approaches that sort potentially large result sets appropriately. While such ranking functions have been designed in previous work, they have not been evaluated exhaustively. This work addresses this research gap by proposing a formal framework for comparing and evaluating ranking functions for RDF data. The framework allows combining these rankings by means of an extension of Spearman's footrule, together with an estimation of the upper bound of this function. We supply a benchmark comprising 60 manually annotated entity rankings created by users from the USA and India recruited via Amazon Mechanical Turk. Moreover, we evaluate nine entity ranking functions on the proposed benchmark.
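For intuition, the classical Spearman's footrule on which the framework builds measures the distance between two rankings as the sum of absolute rank displacements, and its value is bounded above by floor(n^2/2) for rankings of n items. The sketch below illustrates only this standard measure, not the paper's extension; all function names are illustrative.

```python
def spearman_footrule(rank_a, rank_b):
    """Spearman's footrule: sum of absolute differences between the
    positions each item occupies in the two rankings. Assumes both
    rankings are permutations of the same set of items."""
    pos_a = {item: i for i, item in enumerate(rank_a)}
    pos_b = {item: i for i, item in enumerate(rank_b)}
    return sum(abs(pos_a[item] - pos_b[item]) for item in pos_a)


def footrule_upper_bound(n):
    """Maximum footrule distance between two rankings of n items,
    attained when one ranking is the reverse of the other."""
    return (n * n) // 2


def normalized_footrule(rank_a, rank_b):
    """Footrule distance scaled into [0, 1] via its upper bound."""
    n = len(rank_a)
    return spearman_footrule(rank_a, rank_b) / footrule_upper_bound(n)


# Example: a ranking compared with its reverse hits the upper bound.
identical = spearman_footrule(["a", "b", "c"], ["a", "b", "c"])  # 0
reversed_ = spearman_footrule(["a", "b", "c"], ["c", "b", "a"])  # 4
```

Normalizing by the upper bound makes footrule distances comparable across result lists of different sizes, which is what enables rankings from different functions to be combined on a common scale.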