
App. Development w/out Programming IV - Examples
Other than a mention of J2EE, I don't think anyone's cited examples, so here are a couple...
(They're all very domain-specific: you couldn't write a web browser with these.)
The PLCs (Programmable Logic Controllers) that do real-time industrial process control are typically "programmed" by electricians using somewhat graphical ladder logic. Simplistically, ladder logic is a representation of the contacts and coils that an electrician would wire up to connect photo-cells to actuators, etc.
Modern PLCs do a lot more than simple digital I/O, and include analog inputs and outputs, PID control blocks, etc. The latter is a good example of development w/out programming: if you want a PID (proportional/integral/derivative) control loop, you just stick in the PID block, specify which analog channels are the input and output, set a handful of parameters, and that's it.
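For anyone who hasn't met one, this is roughly the arithmetic a PID block performs on every scan. It's a sketch in C# with made-up names, since the whole point of the PLC approach is that you configure this instead of writing it:

// Illustrative sketch (my own, not any vendor's firmware) of the
// arithmetic inside a PID block. The gains and the choice of input/
// output channels are the "parameters" you configure instead of
// writing this code yourself.
public class PidBlock
{
    private readonly double kp, ki, kd;   // proportional, integral, derivative gains
    private double integral, lastError;

    public PidBlock(double kp, double ki, double kd)
    {
        this.kp = kp; this.ki = ki; this.kd = kd;
    }

    // Called once per scan cycle: 'measurement' comes from the configured
    // analog input channel, and the return value drives the analog output.
    public double Update(double setpoint, double measurement, double dt)
    {
        double error = setpoint - measurement;
        integral += error * dt;
        double derivative = (error - lastError) / dt;
        lastError = error;
        return kp * error + ki * integral + kd * derivative;
    }
}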
Although there are no lines of code, PLCs are still complex to program: the programmer typically has to do memory allocation that a compiler would normally do, and they're limited in what they can be used for.
A related example is HMI/SCADA software that typically supervises what a bunch of plant-floor PLCs are doing (Human/Machine Interface, Supervisory Control and Data Acquisition - the world's most annoying acronyms!).
HMI packages allow someone to build a graphical front end to an industrial process by drawing pictures on the screen and linking them to what the PLCs are doing. If you have a PLC controlling a bunch of valves in some sort of chemical plant, you can use the HMI software to draw the pipes, valves, tanks, etc., link the valves to the I/O channels on the PLC and have them change color or whatever to indicate flow and status, and so on. The SCADA part is that you also draw buttons to stop/start processes, and the software can be set up to log the data from the PLC and make graphs, export to Excel or whatever.
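Under the hood, the "linking" usually amounts to a tag database: each screen object subscribes to a named tag, the package polls the PLC, and the object redraws itself when the value changes. Something like this toy sketch (the tag names and the API are invented, not any real SCADA package):

// Toy illustration of HMI tag binding: a valve graphic subscribes to
// a PLC tag and recolors itself whenever the polling loop reports a
// new value read from the PLC's I/O channel.
public class ValveGraphic
{
    public string Tag { get; }                    // e.g. "PLC1.Valve17.Open"
    public string Color { get; private set; } = "Gray";

    public ValveGraphic(string tag) { Tag = tag; }

    // Called by the HMI's polling loop with the latest PLC value.
    public void OnTagUpdate(bool open) { Color = open ? "Green" : "Red"; }
}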
SCADA programs are close to the idea of app. development w/out programming, but to do the whole application, you would have to know about the underlying PLCs.
I haven't looked at it for years, but I think LabVIEW might qualify as well. Again, it's probably more complicated than what Doc Jason has in mind.
Ward
Saturday, April 24, 2004
Simulink, which is part of MATLAB, lets you "program" simulations by dragging and dropping components.
Tom Vu
Saturday, April 24, 2004
LabVIEW is quite impressive, once you get the hang of it. You can develop simple programs with it -- but it certainly isn't a general purpose language.
You used to be able to get a free demo (on CD, before the web took off). I'd recommend all coders have a play with it, just to see something different. It's quite good fun.
I use Matlab, but haven't touched Simulink. The Matlab language is very good for its domain.
One thing that I don't think has been mentioned is genetic programming. For those who don't know, you basically generate a large population of random programs (and restrict the generated programs to be sensible in some way), and evaluate how well each program performs against an objective function (a measure of how well the program does what you want). You then breed good programs with other good programs, using genetic rules such as cross-over (analogous to sexual recombination), random mutation and permutation (etc.) to generate a second generation of programs, which you hope will be better than the first. You repeat the process until some stopping criterion is met (often until the average fitness of the population hasn't changed for some number of iterations), and, hey presto, you have a program that hopefully does the job.
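If that's hard to picture, here's a toy of the whole loop in (modern) C# -- my own sketch, not from any GP package: it evolves little arithmetic expression trees to fit sample data from the target y = x*x + x, using tournament selection, subtree crossover and random mutation. I've used a fixed number of generations as the stopping criterion for brevity:

// Toy genetic programming: evolves expression trees over
// {+, -, *, x, small integer constants} to fit y = x*x + x.
using System;
using System.Collections.Generic;
using System.Linq;

class Node
{
    public string Op = "x";                  // "+", "-", "*", "x", or a constant
    public Node Left, Right;

    public double Eval(double x) => Op switch
    {
        "+" => Left.Eval(x) + Right.Eval(x),
        "-" => Left.Eval(x) - Right.Eval(x),
        "*" => Left.Eval(x) * Right.Eval(x),
        "x" => x,
        _   => double.Parse(Op)
    };

    public Node Clone() =>
        new Node { Op = Op, Left = Left?.Clone(), Right = Right?.Clone() };
}

class ToyGp
{
    static readonly Random Rng = new Random();

    // Random program: either a terminal (x or a constant) or an operator node.
    static Node RandomTree(int depth) =>
        depth == 0 || Rng.Next(3) == 0
            ? new Node { Op = Rng.Next(2) == 0 ? "x" : Rng.Next(1, 5).ToString() }
            : new Node { Op = "+-*"[Rng.Next(3)].ToString(),
                         Left = RandomTree(depth - 1), Right = RandomTree(depth - 1) };

    // Objective function: squared error against the target over x = -5..5.
    static double Error(Node p) =>
        Enumerable.Range(-5, 11).Sum(i => Math.Pow(p.Eval(i) - (i * i + i), 2));

    static List<Node> AllNodes(Node p)
    {
        var list = new List<Node> { p };
        if (p.Left != null) { list.AddRange(AllNodes(p.Left)); list.AddRange(AllNodes(p.Right)); }
        return list;
    }

    static Node PickNode(Node p) { var n = AllNodes(p); return n[Rng.Next(n.Count)]; }

    // Selection: best of three random individuals.
    static Node Tournament(Node[] pop) =>
        Enumerable.Range(0, 3).Select(_ => pop[Rng.Next(pop.Length)])
                  .OrderBy(Error).First();

    // Crossover: copy parent a, overwrite one random node with a subtree of b.
    static Node Crossover(Node a, Node b)
    {
        var child = a.Clone();
        var target = PickNode(child);
        var donor = PickNode(b).Clone();
        target.Op = donor.Op; target.Left = donor.Left; target.Right = donor.Right;
        return child;
    }

    // Mutation: overwrite one random node of a copy with a random subtree.
    static Node Mutate(Node a) => Crossover(a, RandomTree(2));

    static void Main()
    {
        var pop = Enumerable.Range(0, 200).Select(_ => RandomTree(3)).ToArray();
        for (int gen = 0; gen < 50; gen++)          // fixed generation count for brevity
            pop = pop.Select(_ => Rng.NextDouble() < 0.1
                                    ? Mutate(Tournament(pop))
                                    : Crossover(Tournament(pop), Tournament(pop)))
                     .ToArray();
        Console.WriteLine("best error: " + pop.Min(Error));
    }
}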
The approach isn't suitable for all problems (you wouldn't write a mail client like this!), but it is useful for certain problems. For example, I recently saw a conference presentation where someone had evolved a program to detect subtle skin cancers from photos of moles. The ground truth was annotations obtained from expert clinicians. The objective function was detection accuracy, evaluated over normal and abnormal images.
[Apologies if I missed this in the previous threads]
C Rose
Saturday, April 24, 2004
Evolutionary programming, as in the post above, is indeed an interesting candidate for "automatic programming". It is more applicable to pattern discovery and data analysis than to the creation of proper applications. Essentially, the output of these algorithms is a simple program (a.k.a. a model) that expresses correlations in the data it was fed. Gene expression programming (GEP) has recently superseded genetic programming (GP) because it is a more complete mapping of evolution onto computer processes. It also performs several orders of magnitude faster, thus allowing serious evolutionary computation on single PCs.
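For the curious, the distinguishing trick in GEP is that the genome is a flat, fixed-length string (a "K-expression") that is decoded breadth-first into an expression tree. A toy decoder (my own illustration, assuming binary operators only; real GEP genes have a head/tail structure that guarantees every string decodes to a valid tree):

// Toy Karva-notation decoder: symbols are consumed left to right,
// filling the tree level by level, so "+*-abcd" decodes to (a*b)+(c-d).
using System;
using System.Collections.Generic;

class Karva
{
    class Node { public char Symbol; public Node Left, Right; }

    static bool IsFunction(char c) => "+-*/".IndexOf(c) >= 0;

    static Node Decode(string kExpr)
    {
        var root = new Node { Symbol = kExpr[0] };
        var pending = new Queue<Node>();     // function nodes awaiting children
        if (IsFunction(root.Symbol)) pending.Enqueue(root);
        int next = 1;
        while (pending.Count > 0)
        {
            var parent = pending.Dequeue();
            parent.Left = new Node { Symbol = kExpr[next++] };
            parent.Right = new Node { Symbol = kExpr[next++] };
            if (IsFunction(parent.Left.Symbol)) pending.Enqueue(parent.Left);
            if (IsFunction(parent.Right.Symbol)) pending.Enqueue(parent.Right);
        }
        return root;
    }

    static string ToInfix(Node n) =>
        n.Left == null ? n.Symbol.ToString()
                       : "(" + ToInfix(n.Left) + n.Symbol + ToInfix(n.Right) + ")";

    static void Main() =>
        Console.WriteLine(ToInfix(Decode("+*-abcd")));   // prints ((a*b)+(c-d))
}

Because the genome stays a flat string, mutation and crossover are simple string operations, which is presumably a large part of the speed advantage.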
<plug why="I know the author">
If you want more info, here is a nice book and several articles about GEP:
http://www.gene-expression-programming.com/gep
</plug>
Cheers
JSD
JSD
Sunday, April 25, 2004
http://technetcast.ddj.com/tnc_play_stream.html?stream_id=526
Sunday, April 25, 2004
LabVIEW's target audience is people who are comfortable with circuit diagrams.
MATLAB, for numerical processing of arrays.
OpalisRobot has a GUI for IT automation.
There are innumerable GUI components that facilitate writing GUIs, and, similarly, database and connectivity widgets.
There are some nifty graphical CASE tools: for example, ROOM (Real-Time Object-Oriented Modeling) produces state charts that are testable/executable.
In fact, one of the purposes of OOP is to let you extend the 'vocabulary' of the programming language so that you can write domain-specific programs: for example, to let people write problem-domain-specific statements such as "server.start", "socket.connect", "tree.add_item", "customer.delete" (and other, better, examples).
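An invented example of what I mean, where a thin domain class makes the calling code read like one of those statements:

using System;

// Invented example: a domain class extends the language's "vocabulary"
// so the calling code reads like a statement about the problem domain
// rather than general-purpose plumbing.
class Server
{
    private readonly string host;
    public Server(string host) { this.host = host; }
    public void Start() { Console.WriteLine("listening on " + host); }
}

class Demo
{
    static void Main()
    {
        var server = new Server("orders.example.com");
        server.Start();     // the problem-domain statement "server.start"
    }
}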
---
Software that I'm working on now, medical software, will eventually be at a stage where it would benefit from 'AI'. Specifically, diagnostic support from a rule-based expert system: for example, where there are 'rules' such as "If the patient has Symptom A but not Symptom B, then there's a 30% chance that they are experiencing Pathology X, therefore recommend Test C."
I don't have any recent experience with rule-based systems. I'm sure there must be many packages that support the development and deployment of rule-based systems: to enter and manage rules, to make decision trees from these rules, and to apply the rules at run-time.
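To make it concrete, the run-time part I have in mind is small enough to sketch (the rule wording, symptom names and percentages below are invented):

using System;
using System.Collections.Generic;
using System.Linq;

// Toy rule-engine sketch (not a real expert-system package): each rule
// pairs a predicate over the known symptoms with a recommendation.
class Rule
{
    public readonly Func<ISet<string>, bool> When;
    public readonly string Then;
    public Rule(Func<ISet<string>, bool> when, string then) { When = when; Then = then; }
}

class Engine
{
    static void Main()
    {
        var rules = new List<Rule>
        {
            // "If the patient has Symptom A but not Symptom B, then there's a
            // 30% chance of Pathology X, therefore recommend Test C."
            new Rule(s => s.Contains("Symptom A") && !s.Contains("Symptom B"),
                     "30% chance of Pathology X; recommend Test C")
        };

        var observed = new HashSet<string> { "Symptom A" };
        foreach (var rule in rules.Where(r => r.When(observed)))
            Console.WriteLine(rule.Then);
    }
}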
Another, perhaps more complicated, area is 'discovering' those rules.
For example, I've seen research papers in which the researcher presents some complicated and apparently arbitrary mathematical formula of several variables (like "a = x^2 + 0.7*y/z + 0.3*x") and shows that there's a strong statistical correlation between this formula and some clinical condition (for example, "70% of patients whose a > 3 develop high blood pressure, and 10% of patients whose a < 2 develop high blood pressure, therefore 'a' is a good predictor").
What I don't know is how such apparently arbitrary formulae are discovered (I understand statistical 'significance' and 'correlation' ... I can't imagine that they try *all* possible formulae, looking for the ones which have the best correlation with the sample population). However, I would also (in addition to a system that uses rules) like to write, or to have, software that supports that discovery process.
I'd be happy if you could give me any references that might seem relevant to this (I'm not dealing with images, so I'm not trying to learn about AI applied specifically to image recognition).
Christopher Wells
Sunday, April 25, 2004
You can always try a "fishing expedition" by randomly mixing functions. But remember you must TEST the correlation on a NEW data set (other than the one used to guess it in the first place); otherwise it has no validity, since it's likely that SOME random function will seem to correlate with the data just by chance if you try enough random functions.
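In code, that discipline looks something like the sketch below (the data, the outcome and the candidate formulas are all made up): search for the best formula on the training half only, then report the correlation measured on the held-out half.

using System;
using System.Linq;

// "Fishing expedition, validated on fresh data": pick the formula that
// correlates best with the TRAINING half, then report its correlation
// on the unseen test half only.
class Holdout
{
    static readonly Random Rng = new Random();

    // Pearson correlation coefficient of two equal-length series.
    static double Corr(double[] a, double[] b)
    {
        double ma = a.Average(), mb = b.Average();
        double cov = a.Zip(b, (p, q) => (p - ma) * (q - mb)).Sum();
        double va = a.Sum(p => (p - ma) * (p - ma));
        double vb = b.Sum(q => (q - mb) * (q - mb));
        return cov / Math.Sqrt(va * vb);
    }

    static void Main()
    {
        // Fake patient variable x and outcome y.
        var x = Enumerable.Range(0, 200).Select(_ => Rng.NextDouble()).ToArray();
        var y = x.Select(v => 2 * v + 0.1 * Rng.NextDouble()).ToArray();

        var trainX = x.Take(100).ToArray(); var trainY = y.Take(100).ToArray();
        var testX  = x.Skip(100).ToArray(); var testY  = y.Skip(100).ToArray();

        // "Random formulas": here simply x raised to a random power.
        var powers = Enumerable.Range(0, 1000).Select(_ => Rng.NextDouble() * 3).ToArray();

        double bestP = powers.OrderByDescending(p =>
            Corr(trainX.Select(v => Math.Pow(v, p)).ToArray(), trainY)).First();

        double testCorr = Corr(testX.Select(v => Math.Pow(v, bestP)).ToArray(), testY);
        Console.WriteLine("correlation on unseen data: " + testCorr.ToString("F3"));
    }
}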
Dan Maas
Sunday, April 25, 2004
I'm not involved in GP, so I can't give specific introductory references -- I'm sure you can find a dedicated newsgroup via Google, and ask them.
On a different note, I'd discourage the use of the term Artificial Intelligence, as it's misleading. There has been a lot of hot air and expectation over the years. AI is difficult to define, and people generally don't know what it really means or how it might be achieved.
I'd advocate the use of the names of specific techniques: data mining, pattern recognition (classification), non-linear regression, density estimation, genetic programming etc.
C Rose
Sunday, April 25, 2004
Christopher, there are many ways to discover rules: neural networks, GP, GAs, GEP, etc. To do that you need a sample of results, say a survey of patients who did a certain number of tests and their resulting condition after some time or some treatment, i.e. whether their cancer progressed or receded. This sample (the training set) is fed to an algorithm or method, which then tries to discover the correlations in the data. These correlations are expressed as a more or less complex equation, like:
public int Calculate(double[] d)
{
    // d[0]..d[8] are the nine cell measurements; the model returns
    // 1 (malignant) or 0 (benign).
    const double ROUNDING_THRESHOLD = 0.5;
    double dblTemp = 0.0;
    dblTemp = (((d[0]+d[4])-d[8])*d[1]);
    dblTemp += ((d[8]-(d[1]-((d[1]+d[8])+d[7])))*d[5]);
    dblTemp += ((d[6]*((d[3]+d[2])+d[1]))*d[0]);
    dblTemp += (d[3]*(((d[6]+(d[2]-d[7]))-d[8])*d[1]));
    return (dblTemp >= ROUNDING_THRESHOLD ? 1 : 0);
}
This is an actual model, as described below:
------------------------------------------
Diagnosis of breast cancer:
The goal is to classify a tumor as either benign (0) or malignant (1) based on nine different cell analyses.
Real world data obtained from PROBEN1 (Prechelt, L., 1994. PROBEN1 - A set of neural network benchmark problems and benchmarking rules. Technical Report 21/94, Univ. Karlsruhe, Germany).
Both the technical report and the data set cancer1 used here are available for anonymous FTP from Neural Bench archive at Carnegie Mellon University (machine ftp.cs.cmu.edu, directory /afs/cs/project/connect/bench/contrib/prechelt) and from machine ftp.ira.uka.de in directory /pub/neuron. The file name in both cases is proben1.tar.gz.
------------------------------------------
The accuracy of the model is ascertained using statistical measurements and applying it to a "control" group of sample values.
Hope this helps,
JSD
JSD
Monday, April 26, 2004
"The approach isn't suitable for all problems (you wouldn't write a mail client like this!),"
Wasn't this the technique used to develop Outlook?
Jim Rankin
Monday, April 26, 2004