MediaWiki API result

{
    "batchcomplete": "",
    "continue": {
        "gapcontinue": "SandBox",
        "continue": "gapcontinue||"
    },
    "warnings": {
        "main": {
            "*": "Subscribe to the mediawiki-api-announce mailing list at <https://lists.wikimedia.org/postorius/lists/mediawiki-api-announce.lists.wikimedia.org/> for notice of API deprecations and breaking changes."
        },
        "revisions": {
            "*": "Because \"rvslots\" was not specified, a legacy format has been used for the output. This format is deprecated, and in the future the new format will always be used."
        }
    },
    "query": {
        "pages": {
            "88": {
                "pageid": 88,
                "ns": 0,
                "title": "Reading a large data file (efficiently)",
                "revisions": [
                    {
                        "contentformat": "text/x-wiki",
                        "contentmodel": "wikitext",
                        "*": "Reading large data files can quickly become a trouble.\n\nnp.loadtxt('filename') allows an easy conversion of the file to an array, but it is unpractical, in particular if your data file size exceeds your RAM.\n\nAn other, much more efficient way (with the appropriate buffers, etc. handled by python) is using \"with open(...) as file\".\n\n<pre>\nimport numpy as np\n\nN=10000000\nbigMatrix = np.zeros((N, 12))      # same shape as the expected data. Here, we have 12 columns. \n                                   # With this N , \"bigMatrix\" is more or less 1 GB large.\niteration = 0\nwith open(filename, 'r') as f:    # this is an efficient way of handling the file.\n    for line in f:\n        bigMatrix[iteration] = np.fromstring(line, sep=' ')  # if the column separator is a space \" \". Adapt otherwise.\n        iteration +=1\n        if iteration >= N:  # in order not to exceed the matrix size, if the data is longer than N.\n            break\nbigMatrix =  bigMatrix[:iteration, :]     # in order not to have leftover zeros, if the data is shorter than N.\n</pre>\n\nthe only limitation is that you need to specify a shape (esp. the column number) in advance, but usually if you want to analyze many files with some format that you invented, this should not be a problem.\n\nA possible way to circumvent the problem of choosing N in advance is to run something like \n\n<pre>\nimport subprocess\noutput_string = subprocess.check_output(['wc -l my_data_file_name.dat'], shell=True)\nnumber_of_lines_in_file = np.fromstring(output_string, sep=' ')[0]\n</pre>\n\nand then use the resulting line count as N."
                    }
                ]
            },
            "56": {
                "pageid": 56,
                "ns": 0,
                "title": "References on algorithms",
                "revisions": [
                    {
                        "contentformat": "text/x-wiki",
                        "contentmodel": "wikitext",
                        "*": "== On Wikipedia ==\n\n=== Linear albegra ===\n* [http://en.wikipedia.org/wiki/Lanczos_algorithm Lanczos]\n* [http://en.wikipedia.org/wiki/Arnoldi%27s_algorithm Arnoldi]\n* [http://en.wikipedia.org/wiki/Generalized_minimal_residual_method Generalized minimal residual method] (GMRES)\n* [http://en.wikipedia.org/wiki/Biconjugate_gradient_stabilized_method Biconjugate gradient stabilized method]\n* [http://en.wikipedia.org/wiki/Singular_value_decomposition Singular value decomposition] (SVD)\n\n=== Stochastic methods ===\n* [http://en.wikipedia.org/wiki/Metropolis-Hastings_algorithm Metropolis-Hastings]\n* [http://en.wikipedia.org/wiki/Quantum_Monte_Carlo Quantum Monte-Carlo]\n* [http://en.wikipedia.org/wiki/Wang_and_Landau_algorithm Wang-Landau]\n\n=== Renormalization procedures ===\n* [http://en.wikipedia.org/wiki/DMRG Density-matrix renormalization group]\n* [http://en.wikipedia.org/wiki/Time-evolving_block_decimation Time-evolving block decimation]\n\n== Literature =="
                    }
                ]
            }
        }
    }
}