It is very true that a system of equations can often be set up naturally as vector math. The biggest shock when I came to C was that doing vector math is not at all easy, and not native. It is a real pain when you have to take something that is linearized and solve it through substitution.
I was first introduced to GNU Octave during the Stanford ML Class (Fall 2011 - it is now a Coursera class); it was used exclusively, mainly because of the native vector/matrix math. The class was amazing; I hadn't touched vector/matrix math in over 20 years, so I had to do some "brush up".
What really intrigued me, and where it finally "clicked", was learning how to vectorize a process (and how to think about and recognize when a process can be converted in such a manner); that was a great takeaway from that class. It really came together for me when we had to vectorize a neural network simulation; I finally had found a reason to build/own a Beowulf cluster (or an NVidia Tesla desktop supercomputer)! Not that I've built one (can't really afford the electricity), but at least I grasped something that was important to me, and could be used on one.
As for Python, it too has some great numerical libraries available for it; playing around with those while using it in the Udacity CS373 course showed it had similar power and usefulness (though not as "native" as Octave's implementation - but close enough!)...
One of the things that I really love is, as you described, the ability to handle matrix math. In almost all other languages this is an afterthought, but in Matlab, Octave, and others it is a native feature. It strikes me as odd that it has not been handled better in other languages as time has gone by.
Regarding program execution speed, Octave (and of course Matlab) are specialized for matrix computation. The basic native type is a matrix (one-, two- or multi-dimensional collection of real or complex floating point numbers).
Those languages shine for calculations that can be expressed as linear algebra operations: e.g. when you are multiplying a signal vector S(i) by the vector of FIR filter coefficients F(j), which would simply be written as S*F.
Such an operation is very quickly passed from the interpreter to the underlying linear algebra libraries (BLAS, LAPACK, etc.). Those libraries are usually very carefully optimized, and for long vectors the overhead of interpreting the commands is well amortized. In fact, it's quite possible that a naive compiled program in C or Fortran would end up being slower than executing such a command in Octave or Matlab, unless the C program uses tricks similar to those in the optimized BLAS library.
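For readers more at home in Python than Octave, the same idea can be sketched with NumPy (mentioned elsewhere in this thread as Python's near-native matrix math). The signal and coefficient values below are made up for illustration; the point is that the inner-product loop never runs in the interpreter, it is dispatched to the optimized native routine behind `np.dot`:

```python
import numpy as np

# Hypothetical example data: a signal window S(i) and FIR taps F(j)
S = np.array([1.0, 2.0, 3.0, 4.0])       # signal samples
F = np.array([0.5, 0.25, 0.125, 0.125])  # filter coefficients

# One FIR output sample is the inner product of the signal window with
# the coefficient vector; the whole loop runs in optimized native code:
y = np.dot(S, F)
print(y)  # 0.5*1 + 0.25*2 + 0.125*3 + 0.125*4 = 1.875
```

In Octave or Matlab the equivalent would simply be `S*F'` on a row vector, which is what makes the notation so close to the math.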
Of course it's not always possible to express the computation in 'vectorized' form, i.e. to write it as a set of linear algebra operations on entire matrices. When the interpreter has to walk through the matrices element by element, using nested loops like those you would write for matrix operations in compiled C or Fortran, the compiled languages will be faster.
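A small, hypothetical NumPy sketch of that difference: the same computation done element by element (every iteration paying interpreter overhead) and as one whole-matrix expression handed off to native code. The matrix and the scale-and-shift operation are invented purely for illustration:

```python
import numpy as np

A = np.arange(6.0).reshape(2, 3)  # [[0,1,2],[3,4,5]]

# Element-by-element, as one would write it in C or Fortran; in an
# interpreted language every loop iteration pays interpreter overhead:
B_loop = np.empty_like(A)
for i in range(A.shape[0]):
    for j in range(A.shape[1]):
        B_loop[i, j] = 2.0 * A[i, j] + 1.0

# Vectorized: a single expression over the entire matrix, executed
# by the optimized native library in one call:
B_vec = 2.0 * A + 1.0

print(np.array_equal(B_loop, B_vec))  # True
```

Both produce identical results; only the vectorized form lets the interpreter amortize its overhead across the whole matrix.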
Scicos is another MatLab/Simulink clone from France; it includes a GUI and a Simulink-like block-diagram editor. Work with French equipment and for French companies is how the project is funded. It also features C code generation and is a reasonable cross-check for Matlab/Simulink, as a second method to verify simulations/results.
There's almost nothing about any particular language that makes it an interpreted language or a compiled language, although most tend to be used in one form or the other. Almost all of the "interpreted languages" you cited are compilable and have compilers (BASIC, for example, has had compilers available for over 30 years). To compound matters, not all interpretation schemes are equivalent. The BASIC example you cited is pretty much the worst case, and isn't used much in practice anymore. Most languages are reduced to some intermediate form (to varying degrees, depending on the implementation) to gain runtime efficiency.
Conversely, some classically compiled languages also have implementations that exist in interpreted form, for example "Ch", an interpreted version of C (which, if memory serves, Jack Ganssle stumbled on and wrote an article about 5+ years ago).
To confuse matters even more, some languages may be run in a (usually managed) virtual machine environment... or not. The classic Java implementations do the former, although multiple native-code compilers do exist for Java. As far as Python goes, its standard runtime (CPython) is written in C, and the bulk of what many Python programs do is probably spent running code that was written (and optimized) in C.
Back to MATLAB/Octave... MATLAB is both piecewise and majority-compilable (perhaps to C as an intermediate step) to native code, if you choose to do so (although in MathWorks tradition, it may be something that you have to pay extra for). It's also compilable to HDLs for stuffing into FPGAs. Depending on how you partition/implement your project, it's supposed to get pretty close to native hand-written code speed. Leaving it interpreted can have its advantages, but if you have "the need for speed", that option is there. I can't speak to Octave and what, if any, path to native code it has.
Some of the typically interpreted languages have been optimized pretty heavily to get one fairly close to native speed. In some cases I don't doubt that they can actually be faster than the code that even a decent programmer may write in one of their favorite compiled languages.
Using a relatively mature, classically interpreted language, one is probably going to get 90%+ of native-code speed, and in many cases a lot closer than that. In reality, most people don't write code that reaches 90% of what a highly optimized compiled implementation could do anyway. Using an interpreted implementation may even be a net win for most folks.
@Aeroengineer: setting up a toolchain for Eclipse seems to be rather difficult.
Not that difficult, and I've seen an assortment of Eclipse IDEs from vendors with their own extensions built in. Eclipse has a package manager for adding extensions if you don't get a customized version. There are an assortment of customized forks of Eclipse out there, including a few the vendors charge for. See http://texteditors.org/cgi-bin/wiki.pl?EclipseFamily for a no doubt partial list.
It's why Eclipse took over most of the IDE world. It can be extended with custom classes for whatever development you do and whatever language you do it in. There are lots of other IDEs out there - see http://texteditors.org/cgi-bin/wiki.pl?IDEFamily for a list I know of - but as far as I can tell, if you develop for the Windows platform, you use Visual Studio, and if you develop for anything else, you likely use Eclipse. And because Eclipse is written in Java, you can run it on anything with a reasonably current Oracle/Sun Java runtime. I run Eclipse here under Windows and Linux, using the same binary distribution, and could run it under OS X, too.