#36 Mark44 (Mentor) said:
Much of the discussion about incorporating file system semantics seems like a solution in search of a problem.
pbuk said: I don't understand what "information retrieval (for the purposes of compiling or interpreting)" or "the data for computer code" are - are you simply talking about source code?

Yes - Aspect 1 refers to source code that is the source for objects in the language: source code identified by the name of the object, not by the name of the file containing the code.
pbuk said: But these things are entirely implementation dependent: in the GNU C++ compiler on Linux the source code for cin is contained somewhere like /usr/include/stdio.h, which is compiled into the executable, whereas in Microsoft Visual C++ the object code for cin is implemented in the DLL (dynamic link library) C:\Windows\System32\vcruntime140.dll, which is linked in at runtime.

I agree that in current languages these things are implementation dependent; current languages are not the topic of this post. I agree that the design of current languages does not standardize the relation between source code and the files containing the code. The relation is established by a set of miscellaneous conventions.
pbuk said: So the definition of include in C++ IS systematic,

Stephen Tashi said: I agree there is a system to it, but it isn't a modern approach to handling data. It doesn't define any requirement that there be a data structure that tells which program objects are stored in which files.

It is true that C++ doesn't map what you call 'program objects' to the source files containing them, but that is because C++ does not allow any symbol to be defined more than once across all the files included in a compilation, so it doesn't need a map. I suppose this approach could be described as not 'modern', since it originated decades ago, but I don't think that is relevant.
import numpy as np
from scipy.integrate import solve_ivp
const Custom = require('./custom-class');
const { readFileSync } = require('fs');
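For what it's worth, one of the languages quoted above does maintain exactly the kind of runtime data structure under discussion, mapping program objects to the files that define them. A minimal sketch using Python's standard inspect module (json.dumps is just an arbitrary stdlib example):

```python
import inspect
import json

# Every Python function and class records its defining module, and
# inspect can recover the source file behind that module -- a built-in
# map from program objects to files.
print(json.dumps.__module__)              # -> json
print(inspect.getsourcefile(json.dumps))  # path ending in json/__init__.py
```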
Stephen Tashi said: The definition of a future language could assume the basic functions of an IDE exist and define the language in terms of those functions.

I think the way you present this does not reflect the reason IDEs exist. It's about the complexity of tasks/jobs (as the payment basis of a programmer, not as some machine-specific abstraction). IDEs were developed to handle complexity. Programming languages (of the future) should be able to handle complexity too, even if they are built around some kind of 'keep it pure' philosophy. It revolves around the same thing, but it's not a cause-and-effect relation.
pbuk said: I still don't see what any of this has to do with an IDE; it is simply about how the source code for the language is separable into different source files, and how you refer to symbols that are defined in the language's core (and managed extension) packages.
Stephen Tashi said: As I mentioned before, the current model for computer language definition is hardcopy printed documents.

Hardly. All of the documentation for C, C++, C#, and other languages that Microsoft implements compilers for exists purely in electronic form. I'm sure the same is true for the languages implemented under GNU as well.
Stephen Tashi said: If we use an electronic document as the model for defining a computer language then we can use the concept of links to implement references to specific places in electronic documents. I consider this to be using IDE technology because IDEs provide limited versions of this concept. They treat code as an electronic document. They allow treating sections of text as links to other text.

This has nothing to do with IDEs; it relates instead to the use of HTML and CSS (Cascading Style Sheets) technologies in electronic documents.
Mark44 said: Hardly. All of the documentation for C, C++, C#, and other languages that Microsoft implements compilers for exists purely in electronic form. I'm sure the same is true for the languages implemented under GNU as well.
As far as language specifications go, I'm reasonably sure that they are available primarily, if not exclusively, online as PDF files. For example, the ISO C++ standard is available here: https://www.iso.org/standard/68564.html.
Stephen Tashi said: My remarks do not concern the file formats in which text that defines a computer language is stored. My remarks concern the content of the definition of computer languages.

Then I am completely lost in trying to understand your point.
Mark44 said: Then I am completely lost in trying to understand your point.

This is what you wrote a couple of posts ago:

Stephen Tashi said: As I mentioned before, the current model for computer language definition is hardcopy printed documents.
Stephen Tashi said: The definitions of current computer languages do not describe how electronic documents are written.

Any such documents used by C or C++, say, must adhere to the syntax of the language, which includes, in part, the use of such punctuation as semicolons, single and double quotes, pound signs (used by the preprocessor), carriage-return characters, end-of-file marks, and others. These documents are exclusively electronic in form.
Stephen Tashi said: For example, there is nothing in the C language that says "#include "vastlib.h"" must be a link to some other document. The concept of a link, in that sense, is not used in defining the C language.

I don't understand what you're trying to say here. A #include preprocessor directive is absolutely a link to the file named in the directive. How else would the preprocessor "know" to insert the text of the include file into the program that is to be compiled?
Stephen Tashi said: So I say that current computer languages use hardcopy text documents as the model for what they are describing.

This makes no sense to me.