Nick Levinson
Is it always safe to rely on mathematical methods when the scientist cannot see exactly what steps constitute a given method in the implementation they are using? Several things suggest that an important part of refereed work must be inadequately checked: the increasing complexity and specialization of branches of mathematics, where even mathematicians do not always understand each other's work; the fact that a computer does not always do math the way a person was taught to do it by hand; and the reliance on computers and on proprietary, closed-source math programs whose code is difficult, and often unlawful, to examine.
Mathematicians would know how to check, but by now there is often too much material for anyone to devote the time to checking it. A scientist capable of doing so probably has other research to pursue, research to which we look forward. So, in practice, there is no one.
The pure math is not my concern, but computer-ready math is often different because of computer limitations. For example, a formula in a computer inevitably has a length limit that it need not have on paper; if that limit constrains a particular formula, the formula must be replaced with multiple shorter formulae whose results must then be combined.
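As a minimal sketch of how recombining sub-formulae can matter (values chosen only for illustration): floating-point addition is not associative, so splitting one expression into pieces and combining the pieces can change the computed result even when the paper math is identical.

```python
# Floating-point addition is not associative, so the same formula,
# split and recombined differently, can yield different results.

a, b, c = 0.1, 0.2, 0.3

one_pass = (a + b) + c   # formula evaluated in one grouping
split    = a + (b + c)   # same formula, split and recombined

print(one_pass == split)     # False: the grouping changed the answer
print(one_pass, split)
```

On paper the two groupings are equal by associativity; in IEEE 754 double precision each intermediate sum is rounded, so the two orderings round differently.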
I gather that some of the most widely used programs for these purposes rely on closed source code for some of their mathematics, and thus are not completely transparent. Closed-source programs are black boxes: you can see your input and get the output, but exactly how input is transformed into output is hidden. You can check that individual functions produce correct outputs for specific example inputs, but I'm not sure you can test all of the functions with the methods proofs require, in which examples are not probative enough and abstraction is needed; nor that you can test holistically rather than only reductively, which matters if an error that survives the tested examples is compounded with another error as multiple black boxes are applied to one problem. A math program also interacts with the rest of its computing environment, which can itself introduce errors that must then be discovered; this risk is renewed whenever hardware or software gets a new version, and there are usually many associated software and hardware components with separate versions, probably separate authors, and, usually, some bugs. Even if a high-end math program performs all of its math processing itself, without handing any of it off to the operating system, thereby eliminating one set of inspection problems, other interactions remain.
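A hypothetical illustration of the spot-check problem (both functions are invented for this sketch, not from any real program): two "black boxes" that each pass example-based checks can still hide an error that only reasoning over all inputs would catch, and chaining the boxes surfaces it.

```python
def box_a(x):
    # Stand-in for a closed routine: behaves correctly on typical
    # inputs, but is deliberately wrong in an untested region.
    return 2 * x if x <= 1e6 else 2 * x + 1

def box_b(x):
    # A second opaque routine; fine on its own.
    return x / 2

# Spot checks with convenient example inputs all pass:
for x in [1, 10, 1000]:
    assert box_b(box_a(x)) == x

# But the hidden error appears when the boxes are chained on an
# input outside the tested examples:
print(box_b(box_a(10**7)))   # 10000000.5, not 10000000
```

The examples are true outputs yet prove nothing about the untested region; that is the gap between testing by example and proving by abstraction.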
Doubtless the top software firms have highly qualified mathematicians test and correct their work, but doubtless, too, that work is limited by trade secrecy and budget. That model falls far short of the peer-review model used for publishing original research in refereed journals, and of the effect of publication after peer review, when anyone can read the journals and report a problem they find, even a reader who lacks qualifications and is unpaid. With proprietary closed-source software, and especially firmware, even a customer who paid for it usually cannot examine it: they usually don't know how to parse the code (especially code wired into a hardware chip), and they may (as with Windows) be legally barred from reverse-engineering, decompiling, or disassembling it. Some software licenses even prohibit benchmarking, although I don't know whether that applies to the software used in this field.
With open-source software (such as Linux or FreeBSD), the source code is available for anyone to examine as thoroughly as they like, and it can be compiled or interpreted with your own compiler or interpreter on your own computer into the executable program; so you know that the source code you examined is the source code of the program you use for your scientific investigations. Even the recent public debate over privacy prompted by revelations about the work of the National Security Agency (NSA) produced little discussion that I could find of the security of SELinux, an NSA security-enhancement package offered for Linux to anyone who wants to turn it on. Because SELinux is offered under an open-source model, confidence is apparently maintained, even though SELinux alone reportedly comprises over 100,000 lines of code. Writing good open-source software for this kind of math is a huge project, and were I allocating resources I would skimp on other features: write it for only one common desktop platform, leave most user-interface design to add-ons by other people, and license it so that it can be included in closed-source software supplying all the support and non-math features users may want (e.g., multiplatform compatibility, a good user interface, and many input/output interfaces), while the math components can still be checked character by character by anyone.
I'm not merely asking whether physicists are careful (I'm sure they are), whether their computers are good (ditto), whether the scientists know their math (they doubtless do), whether critical computer bugs are reported and patched (I'll assume they tend to be, albeit perhaps late), or whether journal editors are careful (doubtless they are). I'm not objecting to firms making profits, and I'm not pushing a principle that all software should be free; I'm arguing for quality. I'm not arguing either way about whether scientists should modify their software (although open-source licenses generally allow it). I'm not arguing about budget allocations. And I'm not trying to be provocative: this grew out of a prior thread of mine in which I stated what I thought was already accepted but am told is not, so I won't repeat it here, but the concern is nontrivial.
The key question: could there be one or more black boxes in computer math that have not been fully proven in public, across all relevant versions and computing environments, and, if so, is that fundamentally safe for the more momentous work in physics, especially for research questions where opportunities for cross-checking are limited?