
  On 8/25/2018 at 9:59 PM, span said:

most state-of-the-art "artificial intelligence" / computer vision seems to be mostly different variations of throwing shit at the wall and seeing what sticks, with very complex shit-throwing algorithms. Which makes sense, considering you can't model something when you don't know how it works to begin with.

That's how science works: throw shit at a wall, see what sticks, and learn from it. Why does it stick, and all that. It's very much about modelling stuff we don't understand. Do you even know of examples where that wasn't the case?


  On 8/26/2018 at 7:42 AM, goDel said:

 

  On 8/25/2018 at 9:59 PM, span said:

most state-of-the-art "artificial intelligence" / computer vision seems to be mostly different variations of throwing shit at the wall and seeing what sticks, with very complex shit-throwing algorithms. Which makes sense, considering you can't model something when you don't know how it works to begin with.

That's how science works: throw shit at a wall, see what sticks, and learn from it. Why does it stick, and all that. It's very much about modelling stuff we don't understand. Do you even know of examples where that wasn't the case?

 

When modelling forest fires, or cancer, or whatever, you have both some prior understanding of how they work and some data to determine whether your model is appropriate or not. But we still don't understand how human intelligence (or intelligence overall) works well enough to simulate it. Of course trial and error is involved in pretty much all scientific research. My point is that AI/CV relies almost solely on trial and error.


And so does modelling forest fires or cancer!?

 

Not sure why there's a need for an argument about this.

 

Although if you want to argue that modelling in the AI context is different from modelling in other scientific fields, then yeah, I'm completely with you. Models in other fields are very much built to understand and prove hypotheses. In AI they're built just to see whether they can generalise well enough to predict other outcomes. And usually people in AI don't bother thinking about what the model says about the real world; they're just happy to have built a working model. The starting hypothesis was most likely nothing more than the idea that they had data good enough to build a nice model, without any intention to prove how or why.


  On 8/26/2018 at 12:03 AM, Zeffolia said:

I notice few people talk about feature extraction as much as they do about classification algorithms etc., but despite this it's essentially the primary determiner of whether a given ranking algorithm or whatever will end up working in your use case: whether you've extracted features that represent all of the information relevant to the problem you're solving, and whether they're extracted accurately, with noise reduction. I'm no AI/ML expert, let alone a computer vision expert, but I can still see a pathway towards things getting a lot better in the future. You can't rely on the DNN to do all of the heavy lifting; you need some domain-specific knowledge to reduce the problem to input data that's smaller and more abstract.

 

Yeah, featurization is often just as important as the classifier or whatever (or even more so). It depends on the data: some data might be incredibly simple, just a list of numbers for example (which at most you'll normalise), but with multi-column data there are numerous ways of creating the input vector. Often the biggest decisions are which columns actually need to be included, whether to sample and how, and which of the various methods to use for dealing with non-numeric data.

The only stuff I've played around with myself has been regression analysis of price data and stock levels for my clients' sales and purchases. I'm not sure how accurate it's been, because I haven't yet compared it to subsequent real-world data, but in training and evaluation I found that the most important choices were around what data to include rather than how to handle it. With different types of more "self-contained" data (e.g. linguistic, image or audio data), featurization strategies might matter more.

It also seems that the more regression-based things are better understood, probably because they're more similar to standard statistical methods that have been studied for a long time; classifiers and sentiment analysis less so.
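To make that concrete, here's a minimal sketch of those featurization decisions in Python, assuming pandas and scikit-learn are available; the column names and numbers below are made up for illustration:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Made-up multi-column sales data.
df = pd.DataFrame({
    "price":       [9.5, 12.0, 7.25, 15.0],
    "stock_level": [120, 80, 200, 40],
    "category":    ["food", "tools", "food", "toys"],
    "units_sold":  [30, 12, 55, 8],
})

# The featurization decisions: which columns to include, how to
# normalise the numeric ones, and how to encode the non-numeric ones.
features = ColumnTransformer([
    ("numeric", StandardScaler(), ["price", "stock_level"]),
    ("categorical", OneHotEncoder(), ["category"]),
])

model = Pipeline([("features", features), ("regress", LinearRegression())])
X, y = df[["price", "stock_level", "category"]], df["units_sold"]
model.fit(X, y)
print(model.predict(X))  # in-sample predictions of units sold
```

In a sketch like this, swapping which columns go into `X` usually changes the result far more than swapping the scaler or encoder, which matches the experience described above.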


  On 8/26/2018 at 5:39 PM, goDel said:

And so does modelling forest fires or cancer!?

 

Not sure why there's a need for an argument about this.

 

Although if you want to argue that modelling in the AI context is different from modelling in other scientific fields, then yeah, I'm completely with you. Models in other fields are very much built to understand and prove hypotheses. In AI they're built just to see whether they can generalise well enough to predict other outcomes.

 

That's what I'm trying to say... In every other context there's at least a hypothesis to prove or something to work from. With AI there's not.


Nice! Looks like a sensible decision.

 

But still, the idea of giving a computer vision course in JavaScript is out of this world. I'd still be concerned about that; that idea should never have made it onto paper.


  • 3 months later...
  On 6/4/2013 at 1:39 AM, rddm said:

Computing Science Student here. Done plenty of programming for work/school/hobby. 

 

I generally find most languages fun if you're using them where they excel. Right now I'm working mostly in C and some C#.

Without going into too much detail, I just finished something relatively complicated (for me at least) at work: a tuning system that ran on a laptop for audio software on another device. You would plug your laptop into the system with an Ethernet cable and tune the system through a GUI on your laptop. It wasn't the most algorithmically complicated thing, but definitely a lot of code.

 

If I had to pick a favorite though it would probably have to be Haskell. Functional programming is awesome!

http://learnyouahaskell.com/

This book is really great for getting started. I met the author last year when he came to Canada (from Slovenia). Miran was a super chill badass.

Playing around with Haskell and other functional languages puts a lot of focus on solving little problems, which was initially what was so fun and appealing about programming for me. It's kind of like a bunch of mini-games :).

 

Had some fun last year in school writing A* search in a bunch of different functional languages. Lisp ((((((lol)))))))), Haskell, and Scala.

"learn you a Haskell for great good" -- This author makes his guide look so deceptively easy with his mspaintings. Didn't take me long before I got information overload. Just a couple chapters in but haskellbook.com might be better for those new to programming. idk.. I feel obligated to understand a language now. Might try and apply it towards tidalcycles or something


Speaking of functional languages, I've started to learn me some Clojure. It seems more practical than Haskell or Lisp.

 

I've been mostly a Python programmer for the past few years, and I love Python, but it's nice to learn a new language now and then. My Python style is generally quite functional anyway, with lots of maps, filters, lambda functions, etc.
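By "quite functional" I mean something in this direction; a toy sketch with made-up numbers:

```python
# Toy sketch of a functional Python style: maps, filters and
# lambdas instead of explicit loops.
prices = [9.5, 12.0, 7.25, 15.0, 3.0]

# Keep prices above 5, apply a 10% discount, then total them.
discounted = map(lambda p: p * 0.9, filter(lambda p: p > 5.0, prices))
print(sum(discounted))  # 39.375
```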

 

Anyway, a recruiter already contacted me about Clojure work, and I was thinking "can I say I'm familiar with it after creating one very simple IRC bot?", lol. In the end the schedule for starting the work was too tight for me.


  On 12/15/2018 at 4:28 PM, mokz said:

Speaking of functional languages, I've started to learn me some Clojure. It seems more practical than Haskell or Lisp.

 

I've been mostly a Python programmer for the past few years, and I love Python, but it's nice to learn a new language now and then. My Python style is generally quite functional anyway, with lots of maps, filters, lambda functions, etc.

 

Anyway, a recruiter already contacted me about Clojure work, and I was thinking "can I say I'm familiar with it after creating one very simple IRC bot?", lol. In the end the schedule for starting the work was too tight for me.

What's your setup like? I'm a Vim guy, but I gave Emacs a chance with the CIDER REPL setup and was really enjoying it, but then I broke it somehow and wasn't able to fix it. The whole thing seems really brittle, which is unfortunate because it's a nice language. I'm mostly a JavaScript guy, and it sounds like my style is similar to yours.

I'm pretty invested in Rust these days. It's close to the metal while still making nice abstractions possible through its Haskell-inspired trait system, with many features inspired by functional languages and much nicer ergonomics than you'd get in C++.

 

The ecosystem is very young, so for now I've used it mostly in my spare time, but I have been able to use it at work in a few cases where I'd normally use C++.

 

Currently working a bit with a UI library called Conrod, as cross-platform desktop UI development is in a terrible state, with slow-as-molasses Electron.


  On 12/15/2018 at 11:27 PM, sweepstakes said:

 

  On 12/15/2018 at 4:28 PM, mokz said:

Speaking of functional languages, I've started to learn me some Clojure. It seems more practical than Haskell or Lisp.

 

I've been mostly a Python programmer for the past few years, and I love Python, but it's nice to learn a new language now and then. My Python style is generally quite functional anyway, with lots of maps, filters, lambda functions, etc.

 

Anyway, a recruiter already contacted me about Clojure work, and I was thinking "can I say I'm familiar with it after creating one very simple IRC bot?", lol. In the end the schedule for starting the work was too tight for me.

What's your setup like? I'm a Vim guy, but I gave Emacs a chance with the CIDER REPL setup and was really enjoying it, but then I broke it somehow and wasn't able to fix it. The whole thing seems really brittle, which is unfortunate because it's a nice language. I'm mostly a JavaScript guy, and it sounds like my style is similar to yours.

 

 

I'm an Emacs guy myself. I have the code running on a Debian VPS with Leiningen installed, and I do the coding over SSH from my mini-laptop. I haven't really set up anything fancy yet, just plain ol' Emacs and a shell.

 

I guess most Clojure developers use Eclipse for development?


 

I don't understand any of it (and I wish I did), but this is hella fascinating, interesting and beautifully visualized.


  • 11 months later...

https://www.thejakartapost.com/life/2019/11/28/go-grandmaster-says-computers-cannot-be-defeated.html

  Quote

SEOUL

The only human ever to beat Google's computer algorithm at the ancient Chinese strategy game Go decided to retire because he is convinced machines "cannot be defeated", a report said Wednesday.

South Korean Lee Se-Dol's five-match showdown with Google's artificial intelligence program AlphaGo in 2016 raised both the game's profile and fears of computer intelligence's seemingly limitless learning capability.

The 18-time world Go champion lost all but one encounter in the series, but remains the only person to have won a game against AlphaGo.

The machines have since developed much further -- an updated self-teaching version of the algorithm beat its predecessor 100 games to none.

"Even if I become the number one, there is an entity that cannot be defeated," Lee, 36, told South Korea's Yonhap news agency.

"With the debut of AI in Go games, I've realized that I'm not at the top even if I become the number one," added Lee, who retired from professional Go competition last week.

Go originated in China 3,000 years ago and has been played for centuries -- mostly in China, Japan and South Korea.

The rules are simple -- two players take turns placing black or white stones on a square board with a 19x19 grid. Whoever captures the most territory wins.

But the strategies needed to secure victory are complex, and there are said to be more possible move configurations than atoms in the universe.

Considered one of the greatest Go players of the modern era, Lee started playing at the age of five and turned pro just seven years later.

But he attributed his AlphaGo win to a "bug" in the program's response to his "tricky" play. "My white 78 was not a move that should be countered straightforwardly," he said.

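The "more configurations than atoms" claim checks out on the back of an envelope: each of the 361 points can be empty, black or white, giving at most 3^361 board configurations, versus a commonly cited rough estimate of ~10^80 atoms in the observable universe. A quick Python check:

```python
# Upper bound on Go board configurations: each of the 19*19 = 361
# points is empty, black or white. (Not all of these are legal
# positions, but the bound is enough for the comparison.)
configurations = 3 ** (19 * 19)
atoms = 10 ** 80  # commonly cited rough estimate

print(len(str(configurations)) - 1)  # 172, i.e. about 1.7e172
print(configurations > atoms)        # True
```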


Link to comment
https://forum.watmm.com/topic/79152-programming/page/18/#findComment-2756440
Share on other sites

  On 12/5/2019 at 12:24 PM, rhmilo said:

^ Nah ... neural networks and AI is programming.

Soon it will be the only kind of programming

Definitely not true; there are many problems they can't solve well. Device drivers will never be automatically written using neural networks, nor will cryptographic algorithms. Non-NN AI, though, maybe.

