The Meta Model Demystified: Learn The Keys To Creating Powerful Conversational Change With NLP


We have since created several spin-off courses for students with different backgrounds, with the broad goal of satisfying a diverse set of students in each course. To this end, we have split general computational linguistics courses into more specific ones. In Section 2, we outline how we have stratified our offerings. We started with two primary graduate courses, Computational Linguistics I and II. (Links to the courses, tools, and resources described in this paper can be found on our main website.)

The first introduces the foundations of the field; final projects are a central component of Computational Linguistics II. By having students do presentations on their work before they hand in the final report, they can incorporate feedback from other students. A useful strategy we have found for scoring these projects is to use standard conference review forms. The final projects have led to several workshop and conference publications for the students so far. (Figure 1: Flow for non-seminar courses, left; seminar courses, right.)

The topics have been quite varied: taggers (Moon and Baldridge), lemmatization using parallel corpora (Moon and Erk), graphical visualization of articles using syntactic dependencies (Jeff Rego, CS honors thesis), and feature extraction for semantic role labeling (Trevor Fountain, CS honors thesis). This served a computationally savvy segment of the student population quite well. However, we view one of our key teaching contributions as computational linguists in a linguistics department to be providing non-computational students with technical and formal skills useful for their research.

We discovered quickly that our first computational linguistics course did not fill these needs, and the second is not even accessible to most students. The graduate linguistics students did put in the effort to learn Python for Computational Linguistics I, but many would have preferred a much gentler introduction. Working with corpora: computational linguistics skills and techniques are tremendously valuable for linguists using corpora. Ideally, a linguist should be able to extract the relevant data, count occurrences of phenomena, and do statistical analyses.
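Extracting data and counting occurrences of a phenomenon, as described here, can be sketched in a few lines of Python. The corpus, the pattern, and the "X of Y" construction below are all invented for illustration:

```python
import re

# Count occurrences of a phenomenon in a tiny toy corpus:
# here, "X of Y" sequences found with a deliberately crude regex.
corpus = [
    "the role of syntax in parsing",
    "a matter of taste",
    "parsing is fun",
]
pattern = re.compile(r"\b\w+ of \w+\b")
hits = [m.group(0) for line in corpus for m in pattern.finditer(line)]
print(len(hits), hits)
```

A real corpus study would of course work from part-of-speech tags or parses rather than raw string patterns, but the workflow (extract, count, then analyze) is the same.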

This led us to create a new course, Working with Corpora, which covers corpus formats (XML, among others) and also teaches Python. The course draws students from departments such as German and English.


The course is pitched gently for liberal arts students who have never programmed and have only limited or no knowledge of text processing. Other main topics are the compilation of corpora and statistical analysis; the course includes only a very short two-session introduction to working with R. One of the great surprises for us in our graduate courses has been the interest from excellent linguistics and computer science undergraduates. One way we serve them is to have a staged final project: a proposal partway through, a progress report three-quarters of the way through, a short presentation, and the final report. We have found that having course projects done in this staged manner ensures steady progress.

We briefly outline some of the missteps with this first course, what worked well, and how we are addressing them with new courses. Interestingly, students had no problems with learning this second programming language after Python; this is particularly striking for students who had never programmed before.

This course is a boiled-down version of the graduate Computational Linguistics I. It covers regular expressions, finite-state transducers, part-of-speech tagging, context-free grammar, categorial grammar, meaning representations, and machine translation. As many have found in teaching such courses, some students truly struggled while others wanted it to go much faster. We have not yet used the Natural Language Toolkit (Loper and Bird; see Section 3), but as it, too, offers visualization and rapid access to meaningful results, we intend to use it in the future. In particular, the NLTK should increase the appeal of the course for the significant number of documentary linguistics students in the department. We also offer several seminars in our areas of interest.

Several students had interpreted the course description as requiring no programming background, and we had to slow down to cover basic material like for loops. One of the key points of confusion was regular expression syntax: the syntax used in the textbook (Jurafsky and Martin) transfers easily to regular expressions in Python, but is radically different from that of XFST. One seminar, Spinning Straw into Gold: Automated Syntax-Semantics Analysis, is designed to overlap with the CoNLL shared task on joint dependency parsing and semantic role labeling.

The entire class is participating in the actual competition, and we have been particularly pleased with how this external facet of the course motivates students to consider the topics we cover very carefully: the papers truly matter for the system we are building. As for XFST, for students who had never coded anything in their life, its syntax proved extremely frustrating. On the other hand, for computationally savvy students, XFST was great fun, and it was an interesting new challenge after having to sit through very basic Python lectures.
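The Python side of that contrast is easy to show: textbook-style regular expressions carry over to Python's `re` module almost verbatim. The pattern and sentence below are our own hypothetical classroom-style example, not material from the course:

```python
import re

# Textbook-style regular expressions transfer directly to Python:
# match a determiner followed by the next word-like token.
pattern = re.compile(r"\b(the|a|an)\s+\w+", re.IGNORECASE)

sentence = "The linguist saw a parser near an automaton."
matches = [m.group(0) for m in pattern.finditer(sentence)]
print(matches)
```

XFST's rewrite-rule notation expresses similar patterns very differently, which is exactly the gap that tripped up novice programmers.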

Many of them were highly satisfied. Our first undergraduate course was Introduction to Computational Linguistics. Our experience with this course, which had to deal with the classic divide in computational linguistics courses between students with liberal arts versus computer science backgrounds, led us to split it into two. We had fortunately already planned the first replacement course: Language and Computers, based on the course designed at the Department of Linguistics at the Ohio State University (Brew et al.).

We designed and taught Language and Computers jointly, and added several new aspects to the course. The major challenge is the lack of a textbook, which means that students must rely heavily on lecture slides and notes. The course satisfies a formal-reasoning requirement for liberal arts majors; these requirements were met by course content that requires understanding and thinking about formal methods, programming, and fundamental concepts such as regular languages and frequency distributions. Methods and Tools for Working with Corpora, an advanced undergraduate version of Working with Corpora, was also offered; this is likely because graduate students had already experienced the earlier version.

Natural Language Processing is an advanced undergraduate course. It is cross-listed with computer science and assumes knowledge of programming and formal methods in computer science, mathematics, or linguistics. It is designed for the significant number of students who wish to carry on further from the courses described previously. The high-level material plays an important role for students who find the technical work easy: many find a new challenge in thinking about and communicating clearly the wider role that such technologies play. The high-level material is even more crucial for holding the interest of less formally minded students: it gives them the motivation to work through and understand calculations and computations that might otherwise bore them.

It is also an appropriate course for undergraduates who have ended up taking our graduate courses for lack of such an option. A significant portion of the graduate course Computational Linguistics II also forms part of the syllabus, including machine learning methods for classification tasks, language modeling, hidden Markov models, and probabilistic parsing. Finally, the course provides an excellent way to encourage class discussion. Students write programs that accomplish simplified versions of some of the tasks discussed in the course; for example, short programs for document retrieval and for creating a list of email addresses from US census data.
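A "simplified version" of document retrieval, in the spirit of the short programs mentioned, can be as small as a term-overlap ranker. The documents and query below are made up for illustration:

```python
# Tiny term-overlap retrieval: rank documents by how many query
# words they contain. A deliberately simplified classroom-style task.
docs = {
    "d1": "the quick brown fox",
    "d2": "the lazy dog",
    "d3": "a quick dog",
}

def retrieve(query):
    q = set(query.lower().split())
    scores = {name: len(q & set(text.split())) for name, text in docs.items()}
    return sorted(scores, key=scores.get, reverse=True)

print(retrieve("quick dog"))  # "d3" shares the most query terms
```

Even this toy version opens discussion of the real issues: normalization, weighting terms, and ranking ties.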

Though there are many tools available, a few have become core parts of our courses; in this section, we describe our experience using these. This exercise injects some healthy skepticism into linguistics students who may later have to deal with claims about language technology, and encourages them to learn how to do it for themselves. We use the toolkit and tutorials for several course components, and we are pleased with it. As with other implementation-oriented activities, the spreadsheet designed by Jason Eisner (Eisner) for teaching hidden Markov models is fantastic.


The toolkit supports tagging and chunking, and grammars and parsing. The tutorials and extensive documentation provide novices with plenty of support outside of the classroom, and the toolkit is powerful enough to give plenty of room for advanced students to play. The homework allows students to implement an HMM from scratch, giving enough detail to alleviate much of the needless frustration that could occur with this exercise, and it makes the workings of the algorithms much more concrete and apparent.
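A from-scratch HMM exercise of this kind typically centers on the forward algorithm. A minimal sketch, with toy states, probabilities, and observations that are entirely our invention, might look like this:

```python
# Forward algorithm for a toy HMM: compute P(observation sequence)
# by summing over all state paths. All parameters are invented.
states = ["N", "V"]
start = {"N": 0.6, "V": 0.4}
trans = {"N": {"N": 0.3, "V": 0.7}, "V": {"N": 0.8, "V": 0.2}}
emit = {"N": {"fish": 0.7, "sleep": 0.3}, "V": {"fish": 0.4, "sleep": 0.6}}

def forward(obs):
    # alpha[s] = probability of emitting obs[:t+1] and ending in state s
    alpha = {s: start[s] * emit[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: sum(alpha[p] * trans[p][s] for p in states) * emit[s][o]
                 for s in states}
    return sum(alpha.values())

print(round(forward(["fish", "sleep"]), 4))  # 0.2718
```

Working through a trellis like this by hand, as Eisner's spreadsheet has students do, is what makes the dynamic programming concrete.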

Students had very positive reactions, and these tools made teaching significantly easier and more effective. A core part of several courses is finite-state transducers. FSTs have unique qualities for courses about computational linguistics that are taught in a linguistics department. We also feel it is important to make sure students are well aware of the mighty Unix command line and the tools that are available for it.
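An assignment of the kind described might ask for a word-frequency table built both as a shell pipeline and as a script. Here is a Python version of that task (our sketch, with the classic pipeline shown in a comment for comparison):

```python
# Shell version:  tr -cs 'A-Za-z' '\n' < file | sort | uniq -c | sort -rn
# Python version of the same word-frequency task:
import re
from collections import Counter

text = "the cat and the dog and the bird"
counts = Counter(re.findall(r"[A-Za-z]+", text.lower()))
for word, n in counts.most_common():
    print(n, word)
```

Seeing the two side by side makes the point of the assignment: four chained single-purpose tools can match a script, and each approach has its place.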

We usually have at least one homework assignment per course that involves doing the same task with a Python script versus a pipeline using command-line tools like tr, sort, grep, and awk. FSTs, for their part, are an elegant extension of finite-state automata and are simple enough that their core aspects and capabilities can be expressed in just a few lectures.
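That formal simplicity is easy to demonstrate: a deterministic FST is just a transition table mapping (state, input symbol) to (next state, output symbol). The one-state toy machine below, which rewrites `a` as `b` and copies `c` through, is our own invented example of the formalism, not one from the course:

```python
# Minimal deterministic finite-state transducer interpreter.
def run_fst(transitions, finals, start, tape):
    state, out = start, []
    for sym in tape:
        if (state, sym) not in transitions:
            return None  # no transition defined: reject the input
        state, emitted = transitions[(state, sym)]
        out.append(emitted)
    return "".join(out) if state in finals else None

# One state, two arcs: rewrite "a" as "b", copy "c" through.
toy = {("q0", "a"): ("q0", "b"),
       ("q0", "c"): ("q0", "c")}
print(run_fst(toy, {"q0"}, "q0", "acca"))  # "bccb"
```

Phonological rewrite rules of the kind linguistics students know compile into machines of exactly this shape, which is why FSTs sit so naturally in a linguistics department.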

Computer science students see that such pipelines are sometimes preferable to writing scripts that handle everything, and they see how the scripts they write can fit into a pipeline. More importantly, FSTs can be used to elegantly solve problems in phonology and morphology that linguistics students can readily appreciate. Grammar engineering with OpenCCG: the problem with using OpenCCG is that its native grammar specification format is XML designed for machines. The replacement format expresses grammatical information at various levels of granularity while still allowing direct source text editing of the grammar.

Students in the course persevered and managed to complete the assignments; nonetheless, it became glaringly apparent that the non-intuitive XML specification language was a major stumbling block that held students back from more interesting aspects of grammar engineering. We therefore developed DotCCG, a new specification language, and a converter that generates the XML from it. The third component was several online tutorials, written as publicly available wiki pages, for writing grammars with VisCCG and DotCCG. A pleasant discovery was the tremendous utility of the wiki-based tutorials: it was very easy to update them.

DotCCG is not only simpler; it also uses several interesting devices, including support for regular expressions and string expansions. It was possible to fix bugs or add clarifications while students were following the tutorials in the lab. Students were able to create and test grammars of reasonable complexity very quickly and with much greater ease. We used these devices to create a web and graphical user interface, VisCCG, and to develop instructional materials. The goal was to provide suitable interfaces and a graduated series of activities, and we are continuing to develop and improve these materials for current courses.

The work we did produced several innovations for grammar engineering that we reported at the workshop on Grammar Engineering Across the Frameworks (Baldridge et al.). This simple interface allowed students in the undergraduate Introduction to Computational Linguistics course to test their grammars in a grammar-writing assignment: students first write out a grammar on paper and then implement and test it.

In the lexical semantics sections of our classes, we use Shalmaneser. Students grasped the concepts, but it would be preferable to give the students hands-on experience with the tasks, as well as a sense of what does and does not work, and why the tasks are difficult. The university's history in the field goes back to the Linguistics Research Center under the direction of Winfred Lehman. Lauri Karttunen, Stan Peters, and Bob Wall were all on the faculty of the linguistics department.

Shalmaneser was not designed to be a teaching tool. Supported by an instructional technology grant from UT Austin, we are extending it: courses that only do a short segment on lexical semantic analysis will be able to use a web interface, which does not offer the full functionality of Shalmaneser (in particular, no training of new classifiers) but does not require any setup. After Bob Wall retired, researchers and their students in computer science remained very active. These efforts, along with those of Hans Boas in the German department, succeeded in producing a computational linguistics curriculum, funding research, re-establishing links with computer science, and attracting an enthusiastic group of linguistics students.

We plan to have the new platform ready for use in the fall. Grammar engineering workbenches allow students to specify grammars declaratively. For semantic role labeling, the only possibility that has been available so far for experimenting with new features is to program. Altogether, we have a sizable group of computational linguists at Austin, including an artificial intelligence group; despite this, it was easy to overlook if one was considering only an individual department. We thus set up a site to improve the visibility of our CL-related faculty and research.

But, since semantic role labeling features are complex, we are now developing such a language and workbench. We aim for a system that will be usable not only in the classroom but also by researchers who develop semantic role labeling systems or who need an automatic predicate-argument structure analysis system. For now, the web site is a low-cost and low-effort but effective starting point. As part of these efforts, we are working to integrate our course offerings across the university, including the cross-listing of the undergraduate NLP course.

Our students regularly take Machine Learning and other courses in computer science. Ray Mooney will teach a graduate NLP course in the fall that will offer students a different perspective, and we hope that it will drum up further interest. The University of Texas at Austin has a long tradition in computational linguistics. As part of the web page, we also created a wiki.

Other uses include lab information, a repository of programming tips and tricks, a list of important NLP papers, collaboration areas for projects, and general information about computational linguistics. We see the wiki as an important repository of knowledge that will accumulate over time and continue to benefit us and our students. Courses should be cross-listed with computer science or related areas in order to ensure that the appropriate student population is reached. At the graduate level, it is also important to provide structure and context for each course. We are now coordinating with Ray Mooney to define a core set of computational linguistics courses that we offer regularly and can suggest to incoming graduate students.

It simplifies our job, since we answer many student questions on the wiki. This will not be part of a formal degree program per se. One of the big questions that hovers over nearly all discussions of teaching computational linguistics is how much technical background to expect. Our experience as computational linguists teaching and doing research in a linguistics department has shaped how we present computational linguistics to a diverse audience.

This involves getting students to understand the importance of a strong formal basis, ranging from understanding what a tight syntax-semantics interface really means, to how machine learning models relate to questions of actual language acquisition, to how corpus data can or should inform linguistic analyses. We pay careful attention to the backgrounds different populations of students have with respect to programming and formal thinking. A key component of this is to make expectations about the level of technical difficulty of a course clear before the start of classes and to restate this information on the first day of class.

This is important not only to ensure students do not take too challenging a course. It involves (a) assuring programming-wary students that a course will introduce them to programming gently, (b) ensuring that programming-savvy students know when there will be little programming involved or only formal problem solving they are likely to have already acquired, and (c) providing awareness of other courses students may be more interested in right away or after they have completed the current course. It also involves revealing the creativity and complexity of language: showing linguistics students how familiar concepts from linguistics translate to technical questions (for example, addressing agreement using feature logics), and showing computer science students how familiar friends like finite-state automata and dynamic programming are crucial for analyzing natural language phenomena and managing complexity.

The key is to target the courses so that the background needs of each type of student can be met appropriately without needing to skimp on linguistic or computational complexity for those who are ready to learn about it. Another key lesson we have learned is that the formal categorization of a course within a university course schedule and departmental degree program is a massive factor in enrollment, both at the undergraduate and graduate levels. Computational linguistics is rarely a required course. Acknowledgments.

Kenneth R. Beesley and Lauri Karttunen. Finite State Morphology.
Multidisciplinary instruction with the Natural Language Toolkit. Association for Computational Linguistics.
Language and computers: Creating an introduction for a general undergraduate audience.
Jason Eisner. An interactive spreadsheet for teaching the forward-backward algorithm.
Katrin Erk and Sebastian Pado. Shalmaneser: a flexible toolbox for semantic role assignment.
Daniel Jurafsky and James H. Martin. Speech and Language Processing.
Edward Loper and Steven Bird. NLTK: The Natural Language Toolkit. Association for Computational Linguistics.
Ben Wing. Language Documentation and Conservation, 1.

Taesun Moon and Jason Baldridge. Part-of-speech tagging for Middle English through alignment and projection of parallel diachronic texts.
Taesun Moon and Katrin Erk. Minimally supervised lemmatization scheme induction through bilingual parallel corpora.
Inducing Combinatory Categorial Grammars with genetic algorithms.

The degree can be completed in one year of full-time study, or two to three years of part-time study. We then reflect on how we have approached the challenges of setting up the program, and on our future plans.

Originally designed for CS professionals looking for additional training, the program now also serves working professionals who want to return to school to retool for a career change; with them in mind, we designed a curriculum that can be completed in 12 months of intensive full-time study. In this way, students can complete the degree without leaving the working world for too long. In the past two decades, there has been tremendous progress in natural language processing and various language technologies.

This paper describes our master's program in Computational Linguistics (CLMA), one of the largest programs of its kind in the United States, and highlights unique features that are key to its success. The CLMA program is currently operating in its third year as a fee-based degree program. The flexibility of the part-time option has allowed us to develop a two-year schedule which accommodates students who need time to get up to speed with key CS concepts. The curriculum is designed around hands-on and collaborative work which prepares students for industry jobs.

The program is managed jointly by the Department of Linguistics and the Educational Outreach arm of the University. It is distinguished by its programmatic focus, its flexibility, and its format and delivery, as well as by the partnerships that are an integral part of this degree. At the same time, the courses are structured around fundamental building blocks rather than applications, in order to teach students to think like computational linguists. Much of the communication is primarily text-based and asynchronous: course materials are disseminated through websites, student programming work is done on a server cluster that is always accessed remotely, and most of the discussion outside of class happens on electronic discussion boards. Students also benefit from computational linguists working at local companies, an informal seminar of the Computational Linguistics lab group (which includes PhD students and focuses on research methodology), and career development activities.

The program budget supports two faculty positions: one tenure-track (and guaranteed by the College of Arts and Sciences) and one two-year visiting position. We have also created a new Certificate in Natural Language Technology. This three-course Certificate includes two NLP courses from the Masters degree and an introductory course, which acts as a refresher course for some degree students.

It reinforces the concepts from Linguistics. The two faculty share the work of supervising MA theses and internships over the summer, and they each have one non-summer quarter off from teaching. A third faculty member in Computational Linguistics teaches three graduate-level courses. The Certificate is an alternate course of study for those students wanting to study a single topic in depth but who are not yet ready to commit to the entire degree.

In addition, the program includes affiliated instructors and guest lecturers, ranging from faculty members of other departments, such as CS and Statistics, to researchers from industry. We are moving to a combined online and in-person format, streaming the content from the classroom to a live remote audience; this will allow us to extend the reach of the program. In the context of current globalization trends, the need for online and distance education is growing (Zondiros), and indeed we hope that our audience will extend beyond North America. A strength of the program is its emphasis on student diversity and allowance for individualized student needs: the program allows for both part-time and full-time enrollment.

We have students from throughout the US, as well as from abroad. It is our position that even with remote participants, the classroom remains a key part of the educational experience, and we have adopted an approach that reflects this. (The visiting position is to be converted to tenure-track in the future, once the program has a longer track record.) Lastly, the program seeks to foster both research and industry interests by providing both thesis and internship options. Our guiding principles are that (1) we should provide students with an educational foundation that is relevant in the long term, and (2) we should emphasize hands-on work.

We designed the curriculum by enumerating subtasks; these subtasks were then grouped by similarity into coherent courses, and the courses into core and elective sets. Three topics resisted classification into any particular course. In addition to understanding each subtask, working computational linguists need a view of the whole field. An advisory board was instrumental in developing the original program focus and curriculum outline, as well as providing input from the perspective of industry, identifying internship opportunities for students, and keeping the content relevant to current industry trends. CLMA students are already interested in further studies in Computational Linguistics, and will be exposed to a broad range of topics throughout the curriculum. The partnership separates fee-based revenue from that of state-run programs and contributes marketing expertise, fiscal management, registration services, and more.

As the outreach arm of the University, UWEO works closely with non-traditional students and is able to leverage its industry contacts to serve this community most effectively. Lastly, partnering with UWEO also serves as a method of risk management for new degree programs. We did still want to give the students an overview of the field so that they could see how the rest of their coursework fits into it. This is done through a two-day orientation at the start of each year. The orientation also introduces the three cross-cutting themes.


As a state school, the University may have difficulty in getting state approval and funding for new degree programs unless initial need and demand can be demonstrated persuasively; UWEO can assist with this. The orientation mentioned above gives the students a chance to get to know each other and the CLMA faculty, and provides practical information about the university, such as libraries and computing lab facilities. There are six required courses: the first two are Linguistics courses, and the remaining four form the NLP core. One NLP course covers design and implementation of coherent systems for practical applications, with topics varying year to year.

Among the four NLP courses, year. In , the students collectively built a Ling Intro to Linguistic Phonetics: Intro- question answering system, which was further de- duction to the articulatory and acoustic correlates veloped into a submission to the TREC competition of phonological features. Issues covered include Jinguji et al. Ling is an estab- Ling Intro to Syntax for Computational lished course from our Linguistics curriculum. Introduction to syntactic analysis and were newly created for this program, and concepts e.

Topics include the syntax-semantics interface and long-distance dependencies, with emphasis placed on formally precise encoding; through the course we progressively build up a consistent grammar for a fragment of English. We have put much effort into improving course design, as discussed in Xia. Students also need statistics: without such knowledge, it is all but impossible to discuss the sophisticated statistical models covered in the core NLP courses.

For the two tokenization, POS tagging, morphological analysis, Linguistics required courses, the only prerequisite is language modeling, named entity recognition, shal- a college-level introductory course in Linguistics or low parsing, and word sense disambiguation. The related fields Based Learning, and the like. Students develop a pre-internship neering on topics such as Machine Learning, Graph- proposal, including a statement of the area of inter- ical Models, Artificial Intelligence, and Human- est and proposed contributions, a discussion of why Computer Interaction as well as courses in the In- the company targeted is a relevant place to do this formation School on topics such as Information Re- work, and a list of relevant references.

Once the stu- trieval. We maintain a list of pre-approved courses, dents have been offered and accepted an internship, which grows as students find additional courses of they write a literature review on existing approaches interest and petition to have them approved. The annual elective offerings in Computational At the end of the internship, students write a self- Linguistics include Multilingual Grammar Engi- evaluation which they present to the internship su- neering, as well as seminars taught by the Com- pervisor for approval and then to the faculty advisor.

If this evaluation does not indicate satisfactory work, the internship will not count. Students also write a post-internship report, including a description of the activities undertaken during the internship and a discussion of their results. Seminar topics have included Corpus Management, Development and Use, Text-to-Speech, Multimodal Interfaces, Lexical Acquisition for Precision Grammars, Semi-supervised and Unsupervised Learning for NLP, and Information Extraction from Heterogeneous Resources.

Theoretical concepts introduced in lecture are put into practice with problem sets and open-ended projects in the Systems and Applications course and the seminars. Collaboration is promoted through group projects as well as active online discussion boards where students and faculty together solve problems as they arise. An MA thesis typically involves original work; in some cases, students may provide theoretical contributions instead. MA theses require a thorough literature review, are typically longer, and represent the kind of research which could be presented at a conference. The thesis option suits students who wish to apply to other PhD programs in the near future.

Internships counting towards the MA degree must be relevant to Computational Linguistics or human language technology; faculty advise students on which option they should take.


For those seeking internships, we will help them identify the companies that match their interests and make the contact if possible. With feedback from the faculty, the students revise their proposals several times before finalizing the thesis topic. We encourage students from a Linguistics background to take CS and Statistics courses before approaching the Computational Linguistics core sequence.

While full-time students must start in Autumn quarter, part-time students can start in any academic quarter. Because a substantial amount of research is required, students are encouraged to take elective courses relevant to their topic. Students enrolling in our program have varied backgrounds in Linguistics, CS, and other undergraduate majors. Program options: our courses are open to qualified students for single-course enrollment.

To better prepare students to benefit from the course offerings, applicants are asked to take an online placement test to identify the areas that they need to strengthen before entering the program. In either case, students can enroll on a graduate non-matriculated basis. In the former case, the practical experience of an internship, together with the industry connections it can provide, is most valuable.

They can then choose to take the summer course or study on their own. In the latter case, a chance to do independent research is most valuable. At this pace, the program is very intense. The online option is also beneficial to local students, allowing them to tune in remotely. In the coming school year, three of our courses will be offered in this format, and we plan to extend the offerings going forward. We also gather feedback about individual courses, the curriculum, and success in getting a job, as well as some qualitative feedback about the program.

For the sake of brevity, we will provide a selection of questions. Some of our graduates have received research grants, and at least two will enroll in our Ph.D. program. We identified a total of 34 CL programs, 23 in the US and 11 in Western Europe. These programs vary from named degrees in Computational Linguistics or a similar variant, to concentrations in other degrees, to loose courses of study. The first four questions ask how well the program as a whole helped the students achieve the goals of learning to think like a computational linguist (Q1), understanding the state of the art in Computational Linguistics (Q2), understanding the potential contributions of both machine learning and knowledge engineering (Q3), and preparation for a job in industry (Q4).

It appears that there is the potential contributions of both machine learning one other university in the US that has enrollment as and knowledge engineering Q3 , and preparation high or higher than our own, but all other programs for a job in industry Q4. Given that this program is only in its in finding a job. These same questions were also asked with tionally, during this 3 year period, there has been an respect to individual courses.

The results were again upward trend in applications which may be a reflec- similar, although slightly lower. Each of these questions was answered by students. For the question of how Other facets of our curriculum which contribute to well the program has prepared students for their cur- its success include: For the question about how important the pro- that tie the courses together. We ask the stu- the program was very intense, but very much worth- dents to attempt real-world scale projects and then while.

The faculty consistently receives high praise; assist them in achieving these goals through provid- students enjoy the small hard-working community; ing software to work from, offering high levels of and comments indicate that the coursework is rele- online interaction to answer questions, and facili- vant for their future career.

When asked about sug- tating collaboration. By working together, the stu- gestions for improvement, students provided a num- dents can build more interesting systems than any- ber of logistical suggestions, would like to see some one could alone, and therefore explore a broader ter- degree of student mentoring, and seek to find ways ritory. Exceptional students coming from Lin- While we at first thought the program to be pri- guistics can get up to speed quickly enough to com- marily a one-year program, the intensity of the cur- plete the program on a full-time schedule and some riculum has resulted in a number students taking have , but many others benefit from being able to longer than one year to complete the program which take it more slowly, as do some students from a CS has impacted the number of students who have thus background.

We also find that having expertise in far completed. Consequently, we will consider stu- Linguistics among the students significantly benefits dent feedback from the survey which—in conjunc- the overall cohort. In the near future, we plan to expand our online of- ferings, which directly expands our audience and 6 Conclusion and future directions benefits local students as described above. We have found connecting course work to faculty research 6. We worked closely with nities for doing so. We are also expanding out inter- our advisory board to develop a course of study well- disciplinary reach within the university.

The TREC suited to training students for industry jobs, while submission was done jointly with faculty from the also striving to design a program that will remain Information School. In pursuing all of these jump in with both feet, offering the full curriculum directions, we will benefit from input from our advi- from year one. This was critical in attracting a strong sory board as well as feedback from current students and reasonably large student body.

It also provided and alumni. A Practically-Focussed Undergraduate Program. Robert Frederking, Eric H. Nyberg, Teruko Mitamura, and Jaime G. Design and Evolution of a Language Technologies Curriculum. Dan Jinguji, William D. Teach- ing Computational Linguistics at the University of Tartu: Experience, Perspectives and Challenges. Diane Neal, Lisa nad Miller. Proctor and Kim-Phuong L. Teaching language technology at the North-West University. Language technology from a European perspec- tive. The evolution of a statistical nlp course. Online, distance education and globalization: Its impact on educational access, inequality and exclusion.

This revision provides a chance to update the programs. In this paper we introduce the curriculum of a first semester B.A. [program in Computational Lin]guistics at the University of Heidelberg, Germany, which was taught for the first time at the Department of Computational Linguistics last winter semester. In addition, we analyze the syllabi of four mandatory courses of the first semester to identify overlapping content which led to redundancies. We suggest [...]. Redundancies are to be avoided given that lecture time is always too sparse and should be used most efficiently, such that there is enough room for examples and short in-course [exercises]. We plotted these topics to see whether they are dealt with in a constructive way across the curriculum. Iterative reintroduction could be helpful for the students if it is accompanied by a reference to the earlier mention.

We will also look at the underlying reasons why GPUs are so well suited to Machine Learning workloads. The talk will also discuss how POCs can be moved seamlessly into productive use. GTC Europe will feature groundbreaking work from startups using artificial intelligence to transform the world in the fields of autonomous machines, cyber security, healthcare and more.

Finally, this talk presents the next generation of tools for the creative industries, powered by AI, and gives case studies on how they've been solving some of the game industry's largest problems over the past year. Join this session to gain insight into the future of game creation. Learn how to use GPUs to run 3D and camera deep learning fusion applications for autonomous driving. Cameras provide high resolution 2D information, while lidar has relatively low resolution but provides 3D data.

Smart fusing of both RGB and 3D information, in combination with AI software, enables the building of ultra-high reliability classifiers. This facilitates the required cognition application for semi-autonomous and fully autonomous driving. We'll present achievements in the field of automated truck driving, specifically the use case of lane keeping in platooning scenarios based on mirror cameras.

Vahana started in early [...] as one of the first projects at A³. The aircraft we're building doesn't need a runway, is self-piloted, and can automatically detect and avoid obstacles and other aircraft. Designed to carry a single passenger or cargo, Vahana is meant to be the first certified passenger aircraft without a pilot. We'll discuss the key challenges in developing the autonomous systems of a self-piloted air taxi that can be operated in urban environments.

Self-driving vehicles will transform the transportation industry, yet must overcome challenges that go far beyond just technology. We'll discuss both the challenges and opportunities of autonomous mobility and highlight the recent work on autonomous vehicle systems by Optimus Ride Inc. The company develops self-driving technologies and is designing a fully autonomous system for electric vehicle fleets. The Stixel World is a medium-level, compact representation of road scenes that abstracts millions of disparity pixels into hundreds or thousands of stixels.

We'll present a fully GPU-accelerated implementation of stixel estimation that produces reliable results in real time (26 frames per second) on the DRIVE PX 2 platform. More and more traditional industries are beginning to use AI, and face challenges around the computing platform, system management, model optimization and more. This talk will describe the process of developing autonomous driving directly from the virtual environment TRONIS, a high-resolution virtual environment for prototyping and safeguarding highly automated and autonomous driving functions that exploits a state-of-the-art gaming engine (Unreal).

The development team works on independent instances of the virtual car which build the foundation for multiple experimental setups. Learn how to adopt a MATLAB-centric workflow to design, develop, and deploy computer vision and deep learning applications onto GPUs, whether on your desktop, a cluster, or embedded Tegra platforms. The trained network is then augmented with traditional computer vision techniques, and the application can be verified in MATLAB. A key technology challenge in computer vision for autonomous driving is semantic segmentation of images in a video stream, for which fully-convolutional neural networks (FCNN) are the state of the art.

In this research, we explore the functional and non-functional performance of using a hierarchical classifier head for the FCNN versus using a single flat classifier head. Our experiments are conducted and evaluated on the Cityscapes dataset. On the basis of the results, we argue that using a hierarchical classifier head for the FCNN can have specific advantages for autonomous driving. Learn how combining machine learning and computer vision with GPU computing helps to create a next-generation informational ADAS experience.
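The hierarchical-head idea described above can be made concrete with a toy sketch. Everything here is our own illustration, not the authors' Cityscapes setup: the two-level hierarchy, the logits and the class names are invented. The point is only that a hierarchical head factors each fine-class probability as P(group) × P(class | group):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical two-level hierarchy: coarse groups -> fine classes.
hierarchy = {"vehicle": ["car", "truck"], "human": ["pedestrian", "rider"]}

# Pretend per-pixel logits produced by the two classifier heads.
coarse_logits = np.array([2.0, 0.5])             # vehicle, human
fine_logits = {"vehicle": np.array([1.5, 0.2]),  # car, truck
               "human":   np.array([0.3, 0.1])}  # pedestrian, rider

# P(fine class) = P(group) * P(fine class | group)
p_coarse = softmax(coarse_logits)
p_fine = {}
for g_idx, (group, classes) in enumerate(hierarchy.items()):
    cond = softmax(fine_logits[group])
    for c_idx, cls in enumerate(classes):
        p_fine[cls] = p_coarse[g_idx] * cond[c_idx]

assert abs(sum(p_fine.values()) - 1.0) < 1e-9  # still a valid distribution
```

Because each conditional distribution sums to one, the product still forms a valid distribution over the fine classes, which is what lets a hierarchical head be trained with an ordinary cross-entropy loss.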

This talk will present a real-time software solution that encompasses a set of advanced algorithms to create an augmented reality for the driver, utilizing vehicle sensors, map data, telematics, and navigation guidance. The broad range of features includes augmented navigation, visualization for advanced parking assistance, adaptive cruise control and lane keeping, driver infographics, driver health monitoring, and support in low visibility. Our approach augments the driver's visual reality with supplementary objects in real time, and works with various output devices such as head unit displays, digital clusters, and head-up displays.

The growing range of functions of ADAS and automated systems in vehicles, as well as the progressive change towards agile development processes, requires efficient testing. Testing and validation within simulation are indispensable for this, as real prototypes are not available at all times and the test catalog can be driven repeatedly and reproducibly. This paper presents different approaches to be used in simulation in order to increase the efficiency of development and testing for different areas of application.

This comprises the use of virtual prototypes, the utilization of sensor models and the reuse of test scenarios throughout the entire development process, which may also be applied to train artificial intelligence. This talk details a team of 17 Udacity Self-Driving Car students as they attempted to apply deep learning algorithms to win an autonomous vehicle race.

At the Self Racing Cars event held at Thunderhill Raceway in California, the team received a car and had two days before the start of the event to work on the car. In this time, we developed a neural network using Keras and Tensorflow which steered the car based on the input from just one front-facing camera in order to navigate all turns on the racetrack.
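The team's actual model was a Keras/TensorFlow convolutional network; as a minimal stand-in for the same supervised idea (map a camera frame to a steering angle learned from recorded driving data), here is a NumPy-only sketch with a linear model and synthetic "frames". All names and data are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for (image, steering-angle) pairs recorded on track.
# Real behavioral cloning uses front-camera frames and a CNN (the team
# used Keras/TensorFlow); a linear model on tiny fake images shows the idea.
n_frames, h, w = 200, 8, 8
images = rng.normal(size=(n_frames, h * w))
true_weights = rng.normal(size=h * w)
steering = images @ true_weights + rng.normal(scale=0.01, size=n_frames)

# Fit steering = image . w by least squares (the "training" step).
w_hat, *_ = np.linalg.lstsq(images, steering, rcond=None)

# "Drive": predict the steering angle for a new frame.
new_frame = rng.normal(size=h * w)
predicted_angle = new_frame @ w_hat
```

A real behavioral-cloning pipeline replaces the least-squares fit with a CNN and the synthetic arrays with recorded frames and steering logs, but the train-then-predict structure is the same.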

We will discuss the events leading up to the race, development methods used, and future plans, including the use of ROS and semantic segmentation. This presentation shows how driving simulators together with DNN algorithms can be used to streamline and facilitate the development of ADAS and Autonomous Vehicle systems. Driving simulators provide an excellent tool to develop, test and validate control systems for the automotive industry. Testing ADAS systems on the driving simulator makes it safer, more affordable and repeatable.

The robustness of the system can be tested on the simulator by altering the environmental conditions and vehicle parameters. Thanks to recent breakthroughs in AI, vehicles will learn and collaborate with humans. There will be a steering wheel in the majority of vehicles for a long time. Therefore a human-centric approach, that is, a safe combination of AI and UI, is needed in order to save more lives in traffic. Our multi-source, multi-sensor approach leads to HD maps that have greater coverage, are more richly attributed, and have higher quality than single-source, single-sensor maps.

Hear how we're weaving more and more sources, such as AI-intensive video processing, into our map making to accelerate towards our goal of real-time and highly precise maps for safer and more comfortable driving. Geometric depth and semantic classification information is fused in the form of semantic stixels, which provide a rich and compact representation of the traffic scene.

We present some strategies to reduce the computational complexity of the algorithms. Learn how deep learning is used to process video streams to analyse human behaviour in real time. We will detail our solution for recognising fine-grained movement patterns of people, i.e. how they perform everyday actions. The novelty of our technical solution is that our system learns these capabilities from watching lots of video snippets showing such actions.

This is exciting because very different applications can be realised with the same algorithms, as we follow a purely data-driven, machine learning approach. We will explain what new types of deep neural networks we created and how we employ our Crowd Acting(TM) platform to cost-efficiently acquire hundreds of thousands of videos for that. It will be a mainstay as a vital part of most level 3 automated cars, but it also has unique stand-alone applications such as drowsiness and attention monitoring, functions that address approximately half of all traffic accidents.

Starting in [...] there will be more advanced systems coming to the market based on improvements in hardware such as high-resolution cameras and GPUs. Around [...] a third generation of in-car AI is to be expected, as the hardware will consist of multiple HD cameras running on the latest GPUs. We found that these networks can learn more aspects of the driving task than is commonly learned today.

We present examples of learned lane keeping, lane changes, and turns. We also introduce tools to visualize the internal information processing of the neural network and discuss the results. In this session we will discuss the challenges facing the integrator of real-time vision systems in military applications, from video streaming and military streaming protocols through to deploying vision systems for 360-degree situational awareness with AI capabilities. GPUs are being used for enhanced autonomy in the defence sector, across the board from ground vehicles through to naval and air applications.

Each application space presents its own challenges through to deployment. Come and find out how the defence industry is addressing these challenges and where the future potential of GPU-enabled platforms lies. The autonomous electric car revolution is here and a bright clean future awaits. Yet as we shift to this fundamentally different technology, it becomes clear that perhaps the entire vehicle deserves a rethink. This means not just adding powerful computers to outdated vehicle platforms, but instead redesigning the agile device for this very different future. This process doesn't start with the mechanical structure of yesteryear; instead, it starts with the GPU.

Deep Learning has emerged as the most successful field of machine learning, with overwhelming success in industrial speech, language and vision benchmarks. Consequently it evolved into the central field of research for IT giants like Google, Facebook, Microsoft, Baidu, and Amazon. Deep Learning is founded on novel neural network techniques, the recent availability of very fast computers, and massive data sets. At its core, Deep Learning discovers multiple levels of abstract representations of the input.

Currently the development of self-driving cars is one of the major technological challenges across automotive companies. We apply Deep Learning to improve real-time video data analysis for autonomous vehicles, in particular, semantic segmentation. We will describe a fast and accurate AI-based, GPU-accelerated vehicle inspection system which scans the underside of moving vehicles to identify threatening objects or unlawful substances (bombs, concealed weapons and drugs), vehicle leaks, wear and tear, and any damage that would previously go unnoticed.

We'll introduce the RADLogics Virtual Resident, which uses machine learning image analysis to process the enormous amount of imaging data associated with CTs, MRIs and X-rays, and introduces, within minutes, a draft report, with key images, into the reporting system. We'll present several examples of automated analysis using deep learning tools, in applications of chest CT and chest X-ray. We'll show the algorithmic solutions used, and quantitative evaluation of the results, along with actual output into the report.

It is our goal to provide many such automated applications, to automatically detect and quantify findings thus enabling efficient and augmented reporting. As computers outperform humans at complex cognitive tasks, disruptive innovation will increasingly remap the familiar with waves of creative destruction. And in healthcare, nowhere is this more apparent or imminent than at the crossroads of Radiology and the emerging field of Clinical Data Science.

As leaders in our field, we must shepherd the innovations of cognitive computing by defining its role within diagnostic imaging, while first and foremost ensuring the continued safety of our patients. If we are dismissive, defensive or self-motivated, then industry, payers and provider entities will innovate around us, achieving different forms of disruption optimized to serve their own needs.

To maintain our leadership position as we enter the era of machine learning, it is essential that we serve our patients by directly managing the use of clinical data science towards the improvement of care, a position which will only strengthen our relevance in the care process as well as in future federal, commercial and accountable care discussions. We'll explore the state of clinical data science in medical imaging and its potential to improve the quality and relevance of radiology as well as the lives of our patients. You'll learn how Triage is using deep learning to diagnose skin cancer from any smartphone.

The average wait time to see a dermatologist in the United States is 1 month and even greater in other parts of the world. In that time skin disorders can worsen or become life threatening. Triage's Co-Founder and CEO, Tory Jarmain, will demonstrate how they trained a Convolutional Neural Network to instantly detect 9 in 10 cancer cases with beyond dermatologist-level accuracy.

Tory will also show how Triage's technology can identify skin disorders across 23 different categories, including acne, eczema, warts and more, using Deep Residual Networks. The need to help elderly individuals or couples remain in their home is increasing as our global population ages. Cognitive processing offers opportunities to assist the elderly by processing information to identify opportunities for caregivers to offer assistance and support.

    This project seeks to demonstrate means to improve the elderly's ability to age at home through understanding of daily activities inferred from passive sensor analysis. The majority of healthcare data is stored in healthcare workflows, electronic health records, and consumer devices. This data is largely untouched. CloudMedx has built a clinical framework that uses advanced algorithms and AI to look at this data, in both structured and unstructured formats, using Natural Language Processing and Machine Learning to bring insights such as patient risks, outcomes, and action items to the point of care.

    The goal of the company is to save lives and improve clinical workflows. Learn how doctors aided in the design process to create authentic VR trauma room scenarios; how expert content and simulation devs crafted a VR experience that would have impact in a world where there's no room for error and why Oculus supports the program. Experiential learning is among the best ways to practice for pediatric emergencies. However, hospitals are spending millions on expensive and inefficient mannequin-based training that does not consistently offer an authentic experience for med students or offer convenient repeatability.

    Join us for a case study on a groundbreaking pilot program that brought together Children's Hospital Los Angeles with two unique VR and AI dev teams to deliver VR training simulations for the most high-stakes emergencies hospitals see. Health systems worldwide need greater availability and intelligent, integrated use of data and information technology. Clalit has been leading innovative interventions using clinical data to drive people-centered, targeted and effective care models for chronic disease prevention and control. Clalit actively pursues a paradigm shift to properly deal with these challenges, using IT, data and advanced analytics to transform its healthcare system to one which can bridge the silos of care provision in a patient-centered approach, and move from reactive therapeutic to proactive preventive care.

    In the presentation we will detail specific examples that allowed for reducing healthcare disparities, preventing avoidable readmissions, and improving control of key chronic diseases. In this talk, FDNA will present how deep learning is used to build an applicable framework that is used to aid in identification of hundreds of genetic disorders and help kids all over the world.

    Genetic disorders affect one in every ten people. Many of these diseases are characterized by observable traits of the affected individuals, a 'phenotype'. In many cases, this phenotype is especially noticeable in the facial features of the patients; Down syndrome, for example. But most such conditions have subtle facial patterns and are harder to diagnose. FDNA will describe their solution, its ability to generalize well for hundreds of disorders while learning from a small number of images per class, and its application for genetic clinicians and researchers.

    The increasing availability of large medical imaging data resources with associated clinical data, combined with advances in the field of machine learning, holds great promise for disease diagnosis, prognosis, therapy planning and therapy monitoring. As a result, the number of researchers and companies active in this field has grown exponentially, resulting in a similar increase in the number of papers and algorithms. A number of issues need to be addressed to increase the clinical impact of the machine learning revolution in radiology.

    First, it is essential that machine learning algorithms can be seamlessly integrated in the clinical workflow. Second, the algorithm should be sufficiently robust and accurate, especially in view of data heterogeneity in clinical practice. Third, the additional clinical value of the algorithm needs to be evaluated. Fourth, it requires considerable resources to obtain regulatory approval for machine learning based algorithms.

    In this workshop, the ACR and MICCAI Society will bring together expertise from radiology, medical image computing and machine learning, to start a joint effort to address the issues above. Learn how to apply deep learning for detecting and segmenting suspicious breast masses from ultrasound images. Ultrasound images are challenging to work with due to the lack of standardization of image formation. Learn the appropriate data augmentation techniques, which do not violate the physics of ultrasound imaging. Explore the possibilities of using raw ultrasound data to increase performance.
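The abstract doesn't list which augmentations respect ultrasound physics; the sketch below encodes one commonly cited example as an assumption of ours: lateral (left-right) flips are plausible because probe geometry is left-right symmetric, while flips along the depth axis are not, because echo intensity attenuates with depth:

```python
import numpy as np

def augment_ultrasound(img, rng):
    """Toy augmentation that respects a depth axis.

    Assumption (ours, from the abstract's hint): lateral flips are safe,
    while flips along the depth (axial) axis are not, since attenuation
    grows with depth. img has shape (depth, lateral).
    """
    out = img
    if rng.random() < 0.5:
        out = out[:, ::-1]          # lateral flip: physically plausible
    # NOTE: no out[::-1, :] here -- an axial flip would put the strongest
    # echoes at the largest depth, which real ultrasound never produces.
    return out

rng = np.random.default_rng(1)
# Fake B-mode image whose intensity decays with depth (attenuation).
depth = np.linspace(1.0, 0.1, 32)[:, None]
img = depth * np.ones((32, 48))
aug = augment_ultrasound(img, rng)
# The depth-decay profile is preserved by any allowed augmentation.
assert aug[0].mean() > aug[-1].mean()
```

A real pipeline would add small intensity jitter, speckle-preserving crops and the like; the point is only that the set of valid transforms is constrained by the imaging physics.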

    Ultrasound images collected from two different commercial machines are used to train an algorithm to segment suspicious breast masses with a mean Dice coefficient of 0.[...] The algorithm is shown to perform on par with a conventional seeded algorithm. However, a drastic reduction in computation time is observed, enabling real-time segmentation and detection of breast masses. It is not always easy to accelerate a complex serial algorithm with CUDA parallelization. A simple CUDA adaptation of a CPU-based implementation can improve the speed of this particular kind of sequence alignment, but it's possible to achieve order-of-magnitude improvements in throughput by organizing the implementation so as to ensure that the most compute-intensive parts of the algorithm execute on GPU threads.
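The abstract doesn't name the alignment algorithm, so as an illustration we use classic Needleman-Wunsch dynamic programming, a common target for this kind of CUDA port. The CPU sketch below highlights, in comments, the anti-diagonal independence that GPU threads exploit (the scoring parameters are invented):

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score by dynamic programming.

    Cell (i, j) depends only on (i-1, j-1), (i-1, j) and (i, j-1), so all
    cells with the same i + j (one anti-diagonal) are independent -- that
    independence is what a CUDA port exploits by assigning one thread per
    cell of the current anti-diagonal.
    """
    n, m = len(a), len(b)
    H = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        H[i][0] = i * gap
    for j in range(m + 1):
        H[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(H[i - 1][j - 1] + s,
                          H[i - 1][j] + gap,
                          H[i][j - 1] + gap)
    return H[n][m]

assert needleman_wunsch("GATTACA", "GATTACA") == 7   # perfect match
assert needleman_wunsch("A", "T") == -1              # mismatch beats two gaps
```

The order-of-magnitude gains the abstract mentions come from keeping the whole score matrix (or a sliding band of it) on the GPU, rather than shipping each anti-diagonal back to the CPU.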

    Fast, inexpensive and safe, ultrasound imaging is the modality of choice for the first level of medical diagnostics. During the session, we will present an overview of ultrasound imaging techniques in medical diagnostics, explore the future of ultrasound imaging enabled by GPU processing, as well as set out the path to the conception of a portable 3D scanner.

    We will also demonstrate our hardware developments in ultrasound platforms with GPU-based processing. Having started with one large research scanner, we have begun our migration towards more commercially viable solutions with a small hand-held unit built on the mobile GPU NVIDIA Tegra X1. We'll highlight our work in liquid biopsy and non-invasive prenatal testing and how the breadth in technology offerings in semiconductor chips gives us the scale of sequencing from small panels to exomes.

    We'll discuss our analysis pipeline and the latest and greatest in algorithm development and acceleration on GPUs as well as our experiences ranging from Fermi to Pascal GPU architectures. How can we train medical deep learning models at a petabyte scale and how can these models impact clinical practice? We will discuss possible answers to these questions in the field of Computational Pathology. Pathology is in the midst of a revolution from a qualitative to a quantitative discipline. This transformation is fundamentally driven by machine learning in general and computer vision and deep learning in particular.

    The models are trained based on petabytes of image and clinical data on top of the largest DGX-1 V cluster in pathology. The goal is not only to automate cumbersome and repetitive tasks, but to impact diagnosis and treatment decisions in the clinic. This talk will focus on our recent advances in deep learning for tumor detection and segmentation, on how we train these high-capacity models with annotations collected from pathologists, and how the resulting systems are implemented in the clinic.

    Machine Learning in Precision Medicine: The talk will focus on general approaches requiring machine learning to obtain image-based quantitative features, reach patient diagnoses, predict disease outcomes, and identify proper precision-treatment strategies. While the presented methods are general in nature, examples from cardiovascular disease management will be used to demonstrate the need for and power of machine learning enabled by the performance advantages of GPU computation. AI in medical imaging has the potential to provide radiology with an array of new tools that will significantly improve patient care.

    To realize this potential, AI algorithm developers must engage with physician experts and navigate domains such as radiology workflow and regulatory compliance. This session will discuss a pathway for clinical implementation, and cover ACR's efforts in areas such as use case development, validation, workflow integration, and monitoring. In this talk I will describe the research and development work on medical imaging done at PingAn Technology and Google Cloud, covering five different tasks. I'll present the technical details of the deep learning approaches we have developed, and share with the audience the research direction and scope in the medical fields at PingAn Technology and the PingAn USA Lab.

    Deep learning models give state-of-the-art results on diverse problems, but their lack of interpretability is a major limitation. Consider a model trained to predict which DNA mutations cause disease: we present algorithms that provide detailed explanations for individual predictions made by a deep learning model and discover recurring patterns across the entire dataset. Our algorithms address significant limitations of existing interpretability methods. We show examples from genomics where the use of deep learning in conjunction with our interpretability algorithms leads to novel biological insights.
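The speakers' algorithms aren't specified in this abstract; one simple member of the attribution family that such methods improve upon is gradient-times-input, sketched here on a toy one-layer model of our own invention:

```python
import numpy as np

# Toy "model": score(x) = w . relu(x). Where relu is active,
# d score / d x_i = w_i, so gradient-times-input attribution is w_i * x_i.
w = np.array([2.0, -1.0, 0.5])

def score(x):
    return w @ np.maximum(x, 0.0)

def grad_times_input(x):
    grad = w * (x > 0)          # relu gate: gradient is w where x > 0
    return grad * x             # attribution per input feature

x = np.array([1.0, 2.0, -3.0])
attr = grad_times_input(x)
# Attributions of active features sum to the score here, because this
# toy model is piecewise linear through the origin.
assert np.isclose(attr.sum(), score(x))
```

Per-feature attributions like `attr` are what allow recurring patterns (e.g. motifs in genomic sequence) to be aggregated across an entire dataset.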

    Learn how to apply recent advances in GPU and open data to unravel the mysteries of biology and the etiology of disease. Our team has built data-driven simulated neurons using CUDA and open data, and we are using this platform to identify new therapeutics for Parkinson's disease with funding from the Michael J. Fox Foundation. In this session I'll discuss the open data which enables our approach, and how we are using NVIDIA Tesla cards on Microsoft Azure to dynamically scale to more than [...] GPU cores while managing technology costs.

    Radiological diagnosis and interpretation should not take place in a vacuum -- but today, it does. One of the greatest challenges the radiologist faces when interpreting studies is understanding the individual patient in the context of the millions of patients who have come previously. Without access to historical data, radiologists must make clinical decisions based only on their memory of recent cases and literature.

    Arterys is working to empower the radiologist with an intelligent lung nodule reference library that automatically retrieves historical cases that are relevant to the current case. The intelligent lung nodule reference library is built on top of our state-of-the-art deep learning-based lung nodule detection, segmentation and characterization system.

    As deep learning techniques have been applied to the field of healthcare, more and more AI-based medical systems continue to come forth, which are accompanied by new heterogeneity, complexity and security risks. In the real world, we've seen this sort of situation lead to demand constraints, hindering AI application development in China's hospitals. First, we'll share our experience in building a unified GPU-accelerated AI engine system to feed component-based functionality into the existing workflow of clinical routine and medical imaging.

    Then, we'll demonstrate a pipeline that integrates different types of AI applications (detecting lung cancer, predicting childhood respiratory disease and estimating bone age) as microservices into medical station, CDSS, PACS and HIS systems to support the medical decision-making of local clinicians. On this basis, we'll describe the purpose of establishing an open, unified, standardized and legal cooperation framework to help AI participants enter the market in China and build a collaborative ecology.

    This talk will introduce these concepts, provide examples of how they can transform healthcare, and emphasize why artificial intelligence and machine learning are relevant to them. We will also explain the limitations of these approaches and why it is paramount to engage in both phenomenological data-driven and mechanistic principle-driven modelling. Both areas are in desperate need of better infrastructures, software and hardware, giving access to computational and storage resources. The talk will be thought-provoking and eye-opening as to opportunities in this space for researchers and industries alike.

    The transformation towards value-based healthcare needs inventive ways to lower cost and increase patient health outcomes. Artificial intelligence is vital for realizing value-based care. Turning medical images into biomarkers helps to increase the effectiveness of care. Perspectives and feedback on applying AI technologies in neuroimaging are shared, from expert radiologists and deep learning experts. These technologies can also reduce errors and improve the completeness and accuracy of medical records, and therefore support advanced intelligence applications based on complete patient data.

    Automated image analysis tools can help doctors find abnormalities in images with confidence, especially inexperienced doctors from lower-tier hospitals. A Clinical Decision Support (CDS) system draws on authoritative medical literature, a large amount of expert knowledge, and real cases to improve primary doctors' ability to make accurate diagnoses using complete and accurate patient information. Discuss the difficulties in digital mammography, and the computational challenges we encountered while adapting deep learning algorithms, including GANs, to digital mammography.

    Learn how we address those computational issues, and review our benchmarking results using both consumer and enterprise grade GPUs. There is great promise in machine learning methods for the automated analysis of medical imaging data to support disease detection, diagnosis, and prognosis. Examples include the extraction of quantitative imaging biomarkers related to the presence and stage of disease, radiomics approaches for tumor classification and therapy selection, and deep learning methods for directly linking imaging data to clinically relevant outcomes.

    However, the translation of such approaches requires methods for objective validation in clinically realistic settings or clinical practice. In this talk, I will discuss the role of next-generation challenges for this domain. Learn about the key types of clinical use cases for AI methods in medical imaging beyond simple image classification that will ultimately improve medical practice, as well as the critical challenges and progress in applying AI to these applications. We'll first describe the types of medical imaging and the key clinical applications of deep learning for improving image interpretation.

    Next, we'll describe recent developments in word-embedding methods that leverage the narrative radiology reports associated with images to automatically generate rich labels for training deep learning models, and a recent AI project that pushes beyond image classification to tackle the challenging problem of clinical prediction. We'll also describe emerging methods to leverage multi-institutional data for creating AI models without data sharing, and recent innovative approaches to explaining AI model predictions to improve clinician acceptance.
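The idea of mining report text for training labels can be illustrated with a far simpler stand-in: keyword-based weak labeling. This is only a plain-Python sketch, not the word-embedding method the talk describes; the negation cues, function names, and sample reports are invented for illustration.

```python
# Hypothetical sketch: derive weak training labels for images from the
# free-text radiology reports that accompany them, by keyword matching.

NEGATION_CUES = ("no ", "without ", "negative for ")

def weak_label(report: str, finding: str) -> int:
    """Return 1 if the finding is mentioned affirmatively, else 0."""
    text = report.lower()
    if finding not in text:
        return 0
    # Check whether the mention is negated ("no pneumothorax", ...)
    idx = text.index(finding)
    window = text[max(0, idx - 20):idx]
    if any(cue in window for cue in NEGATION_CUES):
        return 0
    return 1

reports = [
    "Findings: small pneumothorax on the left.",
    "No pneumothorax or pleural effusion identified.",
]
print([weak_label(r, "pneumothorax") for r in reports])  # -> [1, 0]
```

Real systems handle negation, uncertainty, and synonymy far more robustly (for example with embeddings or full NLP pipelines), but the output of even these toy rules is the kind of "rich label" that can supervise an image model.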

    Dive in to recent work in medical imaging, where TensorFlow is used to spot cancerous cells in gigapixel images and helps physicians diagnose disease. During this talk, we'll introduce concepts in deep learning and show concrete code examples you can use to train your own models. In addition to the technology, we'll cover the problem-solving process of thoughtfully applying it to a meaningful problem. We'll close with our favorite educational resources you can use to learn more about TensorFlow. Protecting crew health is a critical concern for NASA in preparation for long-duration, deep-space missions like Mars.

    Spaceflight is known to affect immune cells. Splenic B-cells decrease during spaceflight and in ground-based physiological models. The key technical innovation presented by our work is end-to-end computation on the GPU with the GPU Data Frame (GDF), running on the DGX Station, to accelerate the integration of immunoglobulin gene segments, junctional regions, and modifications that contribute to cellular specificity and diversity. The study results are applicable to understanding processes that induce immunosuppression, like cancer therapy, AIDS, and stressful environments here on Earth.

    Learn how researchers at Stanford University are leveraging the power of GPUs to improve medical ultrasound imaging. Ultrasound imaging is a powerful diagnostic tool that can provide clinicians with feedback in real time. Until recently, ultrasound beamforming and image reconstruction has been performed using dedicated hardware in order to achieve the high frame rates necessary for real-time diagnostic imaging. Though many sophisticated techniques have been proposed to further enhance the diagnostic utility of ultrasound images, computational and hardware constraints have made translation to the clinic difficult.

    We have developed a GPU-accelerated software beamforming toolbox that enables researchers to implement custom real-time beamforming on any computer with a CUDA-capable GPU, including commercial ultrasound scanners. In this session, we'll present how we increase sensitivity in a medical diagnosis system, how we developed a state-of-the-art generative deep learning model for acquiring segmented stroke-lesion CT images, and our market-ready product. We trained our diagnostic system using CT image data from thousands of patients with brain stroke and tested it to gauge the commercial feasibility of use in hospitals and mobile ambulances.
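The core operation such a beamforming toolbox accelerates is delay-and-sum: align each transducer element's signal by its geometric delay, then sum, so echoes from the focal point add coherently. Below is a deliberately tiny plain-Python sketch with integer sample delays and synthetic signals; real pipelines use fractional delays, apodization, and GPU kernels.

```python
# Minimal delay-and-sum beamforming sketch (toy 1-D signals).

def delay_and_sum(channels, delays_samples):
    """Align per-element signals by their delays and sum them."""
    n = min(len(ch) - d for ch, d in zip(channels, delays_samples))
    return [
        sum(ch[d + i] for ch, d in zip(channels, delays_samples))
        for i in range(n)
    ]

# Three elements receive the same pulse with different arrival delays.
pulse = [0.0, 1.0, 0.0]
channels = [
    [0.0] * d + pulse + [0.0] * (5 - d)  # pad every channel to length 8
    for d in (0, 1, 2)
]
focused = delay_and_sum(channels, [0, 1, 2])
print(max(focused))  # coherent sum of the pulse peak -> 3.0
```

With the correct delays the pulse peaks line up and sum to 3.0; with wrong delays the energy smears out, which is exactly the focusing effect beamforming exploits.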

    In medical imaging, acquisition procedures and imaging signals vary across different modalities and, thus, researchers often treat them independently, introducing different models for each imaging modality. To mitigate the number of modality-specific designs, we introduced a simple yet powerful pipeline for medical image segmentation that combines fully convolutional networks (FCNs) with fully convolutional residual networks (FC-ResNets). FCNs are used to obtain normalized images, which are then iteratively refined by means of an FC-ResNet to generate a segmentation prediction.
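Schematically, the pipeline is a normalizing pass followed by repeated residual refinement. The toy functions below are invented stand-ins for the actual FCN and FC-ResNet models, just to show the control flow:

```python
# Sketch of the two-stage pipeline: normalize, then iteratively refine.

def fcn_normalize(image):
    """Stand-in for the FCN stage: rescale values into [0, 1]."""
    lo, hi = min(image), max(image)
    return [(v - lo) / (hi - lo) for v in image]

def fc_resnet_step(pred):
    """Stand-in for one FC-ResNet pass: nudge values toward 0 or 1."""
    return [p + 0.5 * (round(p) - p) for p in pred]

def segment(image, refinement_steps=4):
    pred = fcn_normalize(image)
    for _ in range(refinement_steps):
        pred = fc_resnet_step(pred)  # residual refinement
    return [round(p) for p in pred]

print(segment([10, 200, 30, 250]))  # -> [0, 1, 0, 1]
```

The design point the pipeline makes is that the modality-specific variation is absorbed by the normalizing stage, so the refinement stage can be shared across modalities.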

    We'll show results that highlight the potential of the proposed pipeline, matching state-of-the-art performance on a variety of medical imaging modalities, including electron microscopy, computed tomography, and magnetic resonance imaging. Recent advances enable profiling from smaller patient samples than previously possible. To reduce sequencing cost, we developed a convolutional neural network that denoises data from a small number of DNA fragments, making the data suitable for various downstream tasks. Our platform aims to accelerate adoption of DNA sequencers by minimizing data requirements.

    Nanopore sequencing is a breakthrough technology that marries cutting-edge semiconductor processes with biochemistry, achieving fast, scalable, single-molecule DNA sequencing. The challenge is real-time processing of gigabytes of data per second in a compact benchtop instrument. Attendees will learn how these pieces come together to build a streaming AI inference engine to solve a signal processing workflow. Analysis and performance comparisons of the new Tensor Core units, available on Volta hardware, will be included.

    Image stitching is the process of combining multiple photographic images with overlapping fields of view to produce a segmented panorama or high-resolution image. Image stitching is widely used in many important fields, like high resolution photo mosaics in digital maps and satellite photos or medical images. Motivated by the need to combine images produced in the study of the brain, we developed and released for free the TeraStitcher tool that we recently enhanced with a CUDA plugin that allows an astonishing speedup of the most computing intensive part of the procedure.

    The code can be easily adapted to compute different kinds of convolution. We describe how we leverage shuffle operations to guarantee an optimal load balancing among the threads and CUDA streams to hide the overhead of moving back and forth images from the CPU to the GPU when their size exceeds the amount of available memory. The speedup we obtain is such that jobs that took several hours are now completed in a few minutes.
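The same tiling idea used when images exceed device memory can be sketched in plain Python: process the image in tiles whose halo covers the convolution footprint, so the stitched result matches a full-image convolution. This is a 1-D toy under assumed tile sizes; the CUDA version additionally overlaps host-device transfers with compute via streams.

```python
# Tiled convolution sketch: tiles overlap by a halo of len(kernel) - 1.

def convolve(signal, kernel):
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(len(signal) - k + 1)
    ]

def tiled_convolve(signal, kernel, tile=4):
    halo = len(kernel) - 1
    out = []
    for start in range(0, len(signal) - halo, tile):
        chunk = signal[start:start + tile + halo]  # tile plus halo
        out.extend(convolve(chunk, kernel))        # stitch tile results
    return out

signal = list(range(10))
kernel = [1, 1, 1]
assert tiled_convolve(signal, kernel) == convolve(signal, kernel)
print("tiled result matches full convolution")
```

Because each tile carries its own halo, no tile ever needs data from a neighbor's interior, which is what makes the per-tile transfers and kernels independent and therefore streamable.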

    This talk will present the challenges and opportunities in developing a deep learning program for use in medical imaging. It will present a hands-on approach to the challenges that need to be overcome and the need for a multidisciplinary approach to help define the problems and potential solutions. The role of highly curated data for training the algorithms and the challenges in creating such datasets are addressed. The annotation of data becomes a key point in training and testing the algorithms.

    The role of experts in computer vision and radiology will be addressed, along with how this project can serve as a roadmap for others planning collaborative efforts. Finally, I will discuss the early results of the Felix project, whose goal is nothing short of the early detection of pancreatic cancer, to help improve detection and ultimately improve patient outcomes. Motion tracking with motion compensation is an important component of modern advanced diagnostic ultrasonic medical imaging with microbubble contrast agents. Search based on the sum of absolute differences, a well-known technique for motion estimation, is very amenable to efficient implementations that exploit the fine-grained parallelism inherent in GPUs.

    We'll demonstrate a real-world application of motion estimation and compensation in the generation of real-time maximum intensity projections over time to create vascular roadmaps in medical images of organs, such as the liver, with ultrasound contrast agents. We'll provide CUDA kernel code examples that make this application possible, as well as performance measurements demonstrating the value of instruction-level parallelism and careful control of memory access patterns for kernel performance improvement.
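Sum-of-absolute-differences block matching itself is compact enough to sketch in plain Python. The 1-D frames and parameters below are invented toys; the real implementation runs this search over 2-D blocks as massively parallel CUDA kernels.

```python
# SAD block matching: slide a block over a search window in the
# previous frame and keep the offset with the smallest SAD.

def sad(block, candidate):
    return sum(abs(a - b) for a, b in zip(block, candidate))

def estimate_motion(frame_prev, frame_curr, pos, size, search=3):
    """Offset of the block's position in the previous frame,
    relative to its position `pos` in the current frame."""
    block = frame_curr[pos:pos + size]
    best_offset, best_score = 0, float("inf")
    for off in range(-search, search + 1):
        start = pos + off
        if start < 0 or start + size > len(frame_prev):
            continue  # candidate falls outside the previous frame
        score = sad(block, frame_prev[start:start + size])
        if score < best_score:
            best_offset, best_score = off, score
    return best_offset

prev = [0, 0, 5, 9, 5, 0, 0, 0]
curr = [0, 0, 0, 0, 5, 9, 5, 0]  # pattern shifted right by 2
print(estimate_motion(prev, curr, pos=4, size=3))  # -> -2
```

Every candidate offset's SAD is independent of the others, which is why this search maps so cleanly onto GPU threads: one thread (or warp) per candidate, followed by a parallel minimum reduction.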

    We hope to provide insight to CUDA developers interested in motion estimation and compensation as well as general insight into kernel performance optimization relevant for any CUDA developer. Currently, she works as a Software Engineer at Salesforce. Open Data, Internet Society, Community.

    Open Source technology can benefit local businesses and the workforce. Only because the code is open are we able to talk about it. Open code is a wonderful present to the community. Open Source in Business. New web services based on artificial intelligence are quickly emerging as intelligent digital personal assistants that go beyond the abilities of simple chatbots. Search engines will provide us with answers to all questions within the framework of conversational interfaces.

    Free and open source implementations of digital assistants are challenged with the complexity of the conversational web ecosphere of smart devices and the ability to create intelligent skills out of cloud services. The FOSSASIA community has worked hard to create such an ecosystem to establish a personal assistant framework for everyone based on the principle of privacy and collaboration.

    He is a Big Data Engineer consulting for some of the largest corporate players in Germany on search technology and digital transformation strategies. He is also the architect of large search portals like the German Digital Library. Daimler uses Free and Open Source software within several of its products and strives to support and collaborate with the Open Source community. Automakers are becoming software companies, and just like in the tech industry, Open Source is the way forward.

    At the keynote Daimler will outline its engagement in the Open Source community and plans for the future. The company is also a member of the Linux Foundation and Hyperledger. We will explore examples of businesses that have adopted TensorFlow and Cloud ML to solve their real-world problems: He has developed open source software and web services for data mining. An experienced leader in the financial technology industry, Ramji is known for creating teams and a culture that focus on pragmatic technical excellence and thought leadership, whilst maintaining an obsessive focus on business goals.

    With a career in markets stretching over two decades, Ramji spent many years leading large infrastructure engineering projects for Goldman Sachs in both New York and his native city of London. After moving to Singapore in early , Ramji spent a year building low-latency trading infrastructure before joining J. Morgan, where he also sits on the philanthropy committee and drives meaningful contributions towards innovation. Her team is also responsible for technology exploration and innovations. She also spent several years with the Singapore Government and led teams to plan, drive and coordinate national-level IT initiatives. Liang Moung was an overseas government scholar. Frank Karlitschek started the ownCloud project in to return control over the storing and sharing of information to consumers. In he initiated the Nextcloud project to bring this idea to the next level.

    He has been involved with a variety of Free Software projects including having been a board member for the KDE community. Frank is a fellow of Open Forum Europe. It took us years to develop approaches and processes to scale projects and we are constantly reviewing and working hard on improving ourselves. So, what is next? What do we want to do next, what do we want to do better and how do we want to help more people, train developers, create better software and hardware and do good?

    We believe that the way to move forward is to provide project owners with more responsibilities and ownership. We aim to develop more and more into an organization that provides a framework for projects. We also plan to cooperate more with other organizations. Why shouldn't we run OpenTechNights at events around the world and help to bring people together? We have had very good experiences with our best practices. We have seen that newcomers can progress very fast if they feel welcome and receive help and guidance from others. After they become participants in our programs, many contributors move on to support others as mentors.

    We also see that contributors have moved to large companies, but still continue to help others in the community. We want to share our experience and inspire other projects. Therefore, we plan to participate in more events, we will set up a monthly live-cast and we will run a YouTube series on these topics. While a lot of traditional school education still encourages a top-down approach we will focus our attention even more on enabling the community to collaborate on an equal level and follow the idea of sharing ideas and code freely.

    A question that any organization and project encounters over time is how to ensure the work and setup are sustainable. What settings do projects need to succeed? While there are many non-profit organizations out there developing Open Technologies and FOSS, we also see that many people move on to companies that often focus on proprietary solutions, preventing contributors from continuing their engagement in the Open Tech community for financial reasons and the need to support their families.

    At the same time, we see many projects that have great potential to be Open Source and at the same time commercially successful. We invite projects to apply for the accelerator, and investors and companies to team up with us. And, think about it: while many startups these days start with an idea, Open Source startups already have a product before any investment has started.

    So, these are great opportunities! The problems in this world are too big. There is a lot of injustice, we are destroying the environment, and people are even fighting each other. So, please join us and let's make this a success. I wish everyone a wonderful event full of sharing and new understandings. Let's get inspired to share and learn from each other to build a better world. AI , the Open Source personal assistant. Besides all this she still keeps the eco-hotel she founded in the Vietnamese Mekong Delta running. Hong Phuc loves learning languages and plays piano.

    The contest runs between September and January. In 12 tracks attendees can learn about the latest Open Source technologies and discuss topics from development to deployment and DevOps. We are bringing together some of the core track organizers and MCs to give you a super quick wrap-up of what will happen over the next few days in tracks under the following header: Colin Charles is the Chief Evangelist at Percona. He's well known within open source communities in APAC and has spoken at many conferences. He has worked in the Linux and open source industries for 20 years in several countries on all sides of the globe, has contributed to several open source projects, and is best known for founding the Enlightenment window manager project and having written lots of graphics-related code for X. He provides research and development on low-level software and operating systems, particularly in an embedded or real-time context.

    His main interests are bootloaders, device drivers and high-performance networking. He can also be convinced to teach courses and workshops on a variety of networking-related topics. He was one of the main organisers of FOSDEM, the largest annual open source software conference in Europe, from the early s until . He denies having any involvement with amateur radio or tabletop role-playing games. Victoria Bondarchuk is a UX researcher, interested in open source and open design. Join us for an Open Tech get-together with coffee and snacks, meet speakers and developers, and dive into the exhibition to explore companies and projects with us.

    Nimbus is an exciting and experimental lightweight client for the Ethereum network that focuses on next-generation Ethereum technologies and running on resource-constrained devices, such as mobiles. Focusing on compilers and peer-to-peer applications, Jacek has nurtured his curiosity for software and tinkering with various open source hobby projects for almost two decades.

    With a career background ranging from high-frequency trading and finance through web development and consulting to research and academic collaborations, he recently switched gears, joining Status. Cryptocurrencies have their own APIs exposed for developers to access. They work as RESTful APIs for the blockchain network. Using those APIs, you can do whatever blockchain work you want.
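As a hedged illustration of that idea, here is what consuming such a REST API typically looks like in Python. The endpoint URL, route, and response schema below are entirely hypothetical; every real blockchain API defines its own routes and fields, so check the service's documentation first.

```python
# Sketch of querying a (hypothetical) blockchain REST API.
import json
from urllib.request import urlopen

def fetch_balance(address, endpoint="https://api.example-chain.org"):
    """Query an assumed REST endpoint for an address balance."""
    with urlopen(f"{endpoint}/v1/address/{address}") as resp:
        return json.load(resp)["balance"]

# Parsing the kind of JSON such an API would return
# (a canned sample, so no network access is needed here):
sample_response = '{"address": "1Example...", "balance": 0.042}'
print(json.loads(sample_response)["balance"])  # -> 0.042
```

The pattern is the same for submitting transactions, reading blocks, or watching addresses: an HTTP request in, a JSON document out, which is why almost any language can "do blockchain work" this way.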

    Among all animals, only humans show the curiosity for swapping and exchanging goods. This gave birth to the idea of trade, which enabled widespread collaboration and pole-vaulted us into spreading innovation, goods, and services globally. In this talk, Gaurang Torvekar, the co-founder and CTO of Attores and Indorse, will speak about his experience in the blockchain industry, especially with Ethereum, over the last two years. Attores has been working with Ngee Ann Polytechnic to issue their diplomas on the blockchain, while Indorse is building a decentralized professional social network.

    Gaurang will speak about how blockchain is revolutionizing the education industry and the trends in the space over the years. A hands-on coding workshop with BigQuery, Google's interactive global-scale data analysis tool, for those who have never seen it before but ideally have a little experience with SQL. Jan Peuker is a Strategic Cloud Engineer at Google, where he works on large, distributed systems on the edge between front- and backend.

    In this workshop we will explore how Cloud Dataprep can help you make the most of your data by automatically detecting schemas, datatypes, possible joins, and anomalies such as missing values, outliers, and duplicates. Profiling your data can be time-consuming; Dataprep saves that time and lets you go right to the data analysis. You can plug your solution into Cloud Dataprep as it is serverless and works at any scale. Topics of this session are: In this session, we will discuss the concept of serverless architecture, which is also called FaaS (Functions as a Service).
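To make the anomaly-detection idea concrete, here is a minimal plain-Python profiler for a single numeric column. It flags missing values, duplicates, and values more than 1.5 standard deviations from the mean; that threshold is a toy rule chosen for the small example, not Dataprep's actual logic.

```python
# Tiny column profiler: the kind of checks Dataprep automates.

def profile(column):
    values = [v for v in column if v is not None]
    missing = len(column) - len(values)
    duplicates = len(values) - len(set(values))
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    std = var ** 0.5
    # Simple deviation rule; real tools use more robust statistics.
    outliers = [v for v in values if abs(v - mean) > 1.5 * std]
    return {"missing": missing, "duplicates": duplicates,
            "outliers": outliers}

print(profile([1.0, 2.0, 2.0, None, 3.0, 100.0]))
# -> {'missing': 1, 'duplicates': 1, 'outliers': [100.0]}
```

Running such checks across every column of a wide dataset is exactly the tedious work that a managed, serverless profiler takes off your hands.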

    We will take a closer look at what this often confusing term means and how we can take advantage of it to create new-generation solutions. With AWS Lambda being the most popular implementation yet, we will look at a sample implementation, challenges, and other concerns. This tutorial session will cover. Based on previous labs, all tutorial materials will be freely available during and after the session, allowing students to just watch, follow along on their own laptop, or run the tutorial themselves after the conference. Students can just watch or follow along as they wish.
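A minimal FaaS function can be sketched as a plain handler in the AWS Lambda Python style: the platform invokes the handler once per event, so the code holds no server state of its own. The event shape here is an invented example.

```python
# Minimal serverless (FaaS) handler in the AWS Lambda Python style.
import json

def handler(event, context=None):
    """Return a greeting for the `name` field of the triggering event."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally simulating an invocation (in production the platform calls it):
print(handler({"name": "FOSSASIA"})["body"])
```

Deployment, scaling, and retries are the platform's job; the developer ships only this function, which is the essence of the serverless model discussed in the session.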

    Online resources will be provided to run the labs either on the students own laptop or using their cloud account. The Open Source label was born in February as a new way to popularise free software for business adoption. The presentation will summarize the evolution of open source licenses and the Open Source Definition OSD across two decades, explain why the concept of free open source software has grown in both relevance and popularity and explore trends for the third decade of open source.

    He leads LibreOffice marketing, PR and media relations, co-chairs the certification program, and is a spokesman for the project. Italo has contributed to several migration projects to LibreOffice in Italy, and is a LibreOffice certified migrator and trainer. He has been involved in open source projects since . In his professional life, he is a marketing consultant with over 30 years of experience in hi-tech marketing and media relations.

    We will discuss the lessons we have learned with containers so far, including how Google and other internet-scale companies have been developing and using containers to manage applications for over a decade. His session will address the old world of node-first development vs. Chris Aniszczyk is an open source executive and engineer by trade with a passion for building a better world through open collaboration. Furthermore, he's a partner at Capital Factory where he focuses on mentoring, advising and investing in open source and infrastructure focused startups. At Twitter, he created their open source program and led their open source efforts.

    In a previous life, he bootstrapped a consulting company, made many mistakes, and led and hacked on many eclipse. Can it be done profitably? It is easy to forget that neither of these models was obvious at the dawn of computing; they both had to be invented, and they're not the only ways to do it. The really great news is that dozens of ways of doing it have now been developed and widely used. This panel will explore several specific examples with panellists from quite different backgrounds. Chris is the Chief Technologist at Red Hat. Solid track record with 9 out of 18 investments fully or partly exited.

    All other investments have reached a first break-even point and none has closed down. Meng started and exited two startups in the US. To his horror he discovered that startup financing is currently a manual process involving corporate secretaries and expensive lawyers, hence a ripe opportunity for software innovation and the basis for an open-source startup serving a global market. Blockchain is a hyped technology. We want to know what the real use cases of blockchain are, apart from the hype.

    The panelists will provide insights into their projects and plan for using blockchain. What is the role of Open Source in the projects and companies of panelists? How does the Open Source ecosystem benefit from the trend to blockchain technology? Technologies like blockchain raise questions about the impact on people and the world.


    It takes a lot of energy to generate blockchains and there is no immediate benefit except the coin itself. There is no immediate value, product, or anything a human could use except for the transactional value. What should be the stake of a socially responsible organization with regard to these questions?

    And does the value of the technology outweigh its cons? Can blockchain solve world problems, and which ones? Jollen Chen is the creator and lead developer of Flowchain. You can find him online at http: Mathew is a C programmer with geo-libertarian political views, anarchist tendencies, and some FOSS contributions, mostly to the BSD operating system ecosystem.

    Cryptocurrencies have captured the imagination of technologists, financiers, and economists. Perhaps even more intriguing are the long-term, diverse applications of the blockchain. By increasing transparency of cryptocurrency systems, the contained data becomes more accessible and useful.

    The Bitcoin blockchain data are now available for exploration with BigQuery. All historical data are in the bigquery-public-data: dataset. We hope that by making the data more transparent, users of the data can gain a deeper understanding of how cryptocurrency systems function and how they might best be used for the benefit of society. Relevant for open source creators, users, and choosers. There is the efficiency argument, scaling, time to market.
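As a hedged example, a query over the public Bitcoin dataset might look like the following. The exact table and column names (crypto_bitcoin.transactions, block_timestamp) are assumptions to verify against the current dataset listing before running.

```python
# Illustrative SQL for exploring the public Bitcoin data via BigQuery.
query = """
SELECT DATE(block_timestamp) AS day, COUNT(*) AS n_transactions
FROM `bigquery-public-data.crypto_bitcoin.transactions`
GROUP BY day
ORDER BY day DESC
LIMIT 7
"""
# With the google-cloud-bigquery client library this would run as:
#   from google.cloud import bigquery
#   rows = bigquery.Client().query(query).result()
print(query.strip().splitlines()[0])
```

Because the dataset is public, anyone with a BigQuery account can run aggregations like this without downloading or indexing the chain themselves.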

    There is the isolation argument: resilience. But in this talk, we share a different perspective. Intelligent digital personal assistants powered by artificial intelligence are not simple chatbots. They are driven by the deduction rules of expert systems and the principles of machine learning.

    In this talk we learn in detail how this technology is used in the SUSI personal assistant framework. We will learn how an expert system-based personal assistant works and why it can be very easy to write skills for such a system. Explore how to build and train NLP models to handle freestyle conversation, and also attempt to make a bot clone of yourself. As a bonus, we will deploy the end result to smart speakers like Cortana on Invoke, and potentially Alexa or Google Home.
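To see why skills can be easy to write, consider a toy interpreter for pattern→response rules. The rule syntax below is invented for illustration and only loosely inspired by SUSI's skill files, which have their own richer format.

```python
# Toy skill interpreter: alternating pattern / response lines,
# with "|" separating alternative patterns.

SKILL = """
hello|hi
Hello! How can I help you?
what is your name
I am a demo assistant.
"""

def load_skill(text):
    lines = text.strip().splitlines()
    rules = {}
    for pattern_line, response in zip(lines[::2], lines[1::2]):
        for pattern in pattern_line.split("|"):
            rules[pattern.strip().lower()] = response
    return rules

def reply(rules, utterance):
    return rules.get(utterance.strip().lower(), "Sorry, I don't know.")

rules = load_skill(SKILL)
print(reply(rules, "Hello"))  # -> Hello! How can I help you?
```

The point of the expert-system approach is exactly this separation: skill authors write declarative rule files like SKILL, while the engine handles matching, ranking, and deduction.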

    Featured in VentureBeat as one of the top people to watch in the chatbot space, she loves working with startups and enterprises across Southeast Asia and bringing their bots to life (not literally). She is also a part-time bubble tea addict, cryptocurrency trader, and student of life. The whole task takes about 2 weeks and is divided into the following parts:. In this talk, I want to share the experience and lessons of developing my first application for BSDs. I am especially enthusiastic about technologies related to computer infrastructure, such as operating system kernels, concurrent programming, debugging, performance tuning, etc.

    On top of that, I also like to write technical blog posts in my spare time. While the Intel x architecture is undisputedly the market leader in the server space, several vendors have started introducing ARM64 boards. This presentation examines the suitability of ARM64 server boards for network servers. While ARM64 is definitely slower than Intel on many workloads, it performs at least as well as or better than Intel on workloads that are interesting to the internet community. In this session we'll peek into the background to understand why. Tablets are very attractive mobile computing devices: they are inexpensive and lightweight, with displays, touchscreens, batteries, and more.

    But they can only run iOS, Android, or Windows; there are no Linux distributions on tablets. Does the new generation need or want to use a traditional computer at all? Smartphones and tablets need to be able to run Linux distributions for the new generation. I feel we are entering an era in which it is difficult for the new generation to use desktop Linux.

    The Linux kernel evolved rapidly from Kernel 4. Let's install Linux on tablets and other mobile devices. I'm a mobile Linux hacker. Developing massively parallel systems is restricted by the complex tasks which need to be managed by the programmer. GPU computing provides the opportunity to parallelize data-parallel algorithms while the CPU runs the sequential code. With increasing algorithmic development, some new algorithms require iterations of parallel computation on the GPU at a scale larger than GPU memory, while others require multiple different data-parallel algorithms to run simultaneously; both are notoriously hard for the programmer to manage.

    Asynchronous functions are provided for kernel launch, kernel execution, and data transfer, with the capability to hide communication latency behind computation. To give an example, computation on multiple CPU nodes and GPU nodes can all occur in parallel and be synchronized when the results are required by the user. This system unleashes the potential to take computation to the exascale level.
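The launch-now/synchronize-later pattern can be sketched with ordinary futures in Python. The real system uses HPX-style futures spanning nodes and GPUs; the sleeps below merely stand in for work, and the function names are illustrative.

```python
# Futures sketch: launch independent work, synchronize only on demand.
from concurrent.futures import ThreadPoolExecutor
import time

def simulate_kernel(name, duration):
    time.sleep(duration)          # stands in for GPU/CPU work
    return f"{name} done"

with ThreadPoolExecutor() as pool:
    # "Kernel launches" return immediately as futures...
    f1 = pool.submit(simulate_kernel, "gpu_kernel", 0.05)
    f2 = pool.submit(simulate_kernel, "cpu_task", 0.05)
    # ...other host work could run here, hiding the latency...
    results = [f1.result(), f2.result()]  # synchronize on demand

print(results)
```

Because nothing blocks until a result is actually needed, independent kernels, transfers, and host code overlap automatically, which is the latency-hiding effect the paragraph above describes.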

    This development is currently spearheaded by the Stellar Group community, a consortium of global researchers. The presenter has been a contributor to this community since his Google Summer of Code participation in . Meilix Generator is a new tool to build your own custom Internet kiosk for your business.

    For convenience the kiosk is based on Linux and runs the lightweight LXQt desktop environment. An internet kiosk is a special use case: a single computer is shared by an indefinite number of users, so lots of security concerns arise, and in Asia and elsewhere multiple non-Latin languages have to be supported. We decided to create a custom lightweight distribution, Meilix, and a generator web app that allows you to preconfigure an ISO with the wallpaper and desktop settings in place. I believe in working with the community and sharing ideas.

    I believe one's ideas can become better by sharing them with people. People should learn to produce solutions rather than just find them. I love the free and open-source software community, because you may be working through something and discover useful things that others haven't noticed.

    I love to implement that and am much happier being a part of that change. A unikernel is a novel software technology that links an application with the OS in the form of a library and packages them into a specialized image that can be deployed directly on a hypervisor. Compared to traditional VMs or the more recent containers, unikernels are smaller, more secure, and more efficient, making them ideal for cloud environments. There are already lots of open source projects, like OSv, Rumprun, and so on.

    But why have these existing unikernels yet to gain broad popularity? We think unikernels face four major challenges: 1. compatibility with existing applications; 2. lack of production support; 3. lack of compelling use cases; 4. lack of standards for unikernels. In my presentation, I will review our investigation and exploration of whether and how we can convert Linux into a unikernel to eliminate these significant shortcomings (I call this UniLinux), and some potential but valuable use cases for unikernels, such as IoT, serverless, and I/O-intensive applications.

    Development Suite is a curated, integrated set of desktop tools. Desktop tools combine different components that are required by the developer to get an integrated development platform configured and running on your desktop. It is packaged in an easy-to-use installer and the components can be easily integrated and installed via the interactive web application that runs on MacOS and Windows.

    In this panel we will explore with industry experts how blockchain technologies, which aren't that new but were made popular when Bitcoin suddenly hit the news, can be used to develop applications that, while leveraging the technology, go well beyond what people usually associate with blockchain. We'll look at notarization, voting, participative governments, community utilities, shared economies, and services. We will discuss this with industry experts from the blockchain ecosystem, users and developers alike. It has recently become possible to build an AI system with cryptographic security guarantees, private training data and decentralized training procedures.

    To make this possible, one needs to leverage recent advances in blockchain technology, homomorphic encryption, and secure multi-party computation. The talk will introduce essential ideas from these diverse fields and guide the listener through what's necessary to create secure, private, and decentralized AI systems. We will also introduce key components of OpenMined, the leading open source project in this space, enabling developers to build practical decentralized AI systems.
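One of those essential ideas, additive secret sharing, fits in a few lines of Python. This sketch (the modulus and party count are arbitrary choices for illustration) shows how parties can compute on shares without any one of them seeing the underlying values.

```python
# Additive secret sharing: each party holds a random-looking share,
# yet shares of sums combine to reveal only the sum.
import random

Q = 2**31 - 1  # arithmetic is done modulo a public prime

def share(secret, n_parties=3):
    shares = [random.randrange(Q) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % Q)  # shares sum to secret
    return shares

def reconstruct(shares):
    return sum(shares) % Q

a, b = 42, 100
shares_a, shares_b = share(a), share(b)
# Each party adds its two shares locally; no one ever sees a or b.
shares_sum = [(x + y) % Q for x, y in zip(shares_a, shares_b)]
print(reconstruct(shares_sum))  # -> 142
```

Production systems such as those OpenMined builds on combine this with fixed-point encodings and multiplication protocols, but the additive trick above is the core privacy mechanism.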

    Shankar is a data scientist at Manulife. Before this he worked in academia and with startups in e-commerce, cloud systems for deep learning and AI systems for autonomous vehicles. He is particularly interested in privacy, fairness and safe usage of modern AI systems, and interpretability, statistical efficiency and tooling for machine learning. What does it look like? Blockchain technology is more than just cryptocurrencies - startups, established companies, and governments around the world are harnessing this exciting new technology to transform their businesses.

    Come learn how some of these entrepreneurs, enterprises, and public servants are building on blockchain and smart contracts, and how you too can get started building your own applications! In February Daren joined ConsenSys, one of the largest independent blockchain technology firms globally.

    As part of the ConsenSys team, Daren specializes in delivering enterprise advisory and technology services for clients in government, financial services, and other sectors. In Dubai, Daren played a key role in supporting the Dubai government with their mandate to implement the city-wide blockchain strategy announced in October by H. Prior to joining ConsenSys, Daren spent 3 years with Deloitte in the United States as a business technology analyst and consultant. Daren started his career at Deloitte working closely with Fortune clients to implement Enterprise Resource Planning systems.

    The Internet today is plagued by many problems, from viruses and spam to identity theft and piracy. We can solve these problems: with a virtual operating system that runs the cloud, blockchains to secure identities and data, a virtual network layer to protect against unauthorized network access, and a virtual machine to sandbox untrusted code. This talk will describe Elastos, an operating system for the smart web.

    It will explore the approach Elastos takes to achieve these goals, and give a vision of a possible future internet. Martin has been using Free Software for more than 20 years. He has lived on four continents and been active in the local Free Software communities there. He eventually settled in China, where he now lives with his family, running a small web development shop.

    He continues to be active in the Free Software community. He founded the Free Software Community Leadership Roundtable, a forum where community leaders can share and support each other. In October he joined the Elastos development team as Community Manager. Throughout his career, his interest has always been Free Software that facilitates communication and collaboration and brings the world closer together. With Elastos he continues his mission to create a better future for our global society.

    After 18 years at Lakoo, a mobile game developer he founded, backed by Tencent and Sequoia Capital, he created oice, a visual novel tool which later became LikeCoin. Blockchain governance is perhaps the biggest factor explaining the current state of things, and it can be used to guide the evolution of blockchain. Blockchain governance, like the governance of any similarly complex system, consists of several components, including incentives, communication, compliance and failure management.

    Optimizing each and every one of these factors gives blockchain governance its best shot at becoming mainstream, revolutionary and disruptive to almost any industry in the world. Ethereum is one of the most popular and innovative blockchains in the world. As it grows, so does the demand for scaling the network. Sharding is one approach, and we are working on it.
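    The core idea behind sharding can be sketched very simply: partition state (here, account addresses) across shards so that each node only processes a fraction of all transactions. Real Ethereum sharding is far more involved (beacon chain, crosslinks, cross-shard communication); the names and shard count below are illustrative only.

```python
# Toy deterministic shard assignment: every node agrees on which
# shard owns an account without any coordination, because the
# assignment is a pure function of the address.
import hashlib

NUM_SHARDS = 4

def shard_of(address: str) -> int:
    """Map an account address to a shard id via a hash."""
    digest = hashlib.sha256(address.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

accounts = ["0xalice", "0xbob", "0xcarol", "0xdave"]
assignment = {acct: shard_of(acct) for acct in accounts}
# A validator for shard k only needs transactions whose accounts
# map to k; cross-shard transactions need an extra protocol step.
```

    The hard problems in practice are exactly the ones this sketch ignores: transactions touching accounts on different shards, and keeping each shard's validator set honest.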

    This talk will introduce recent updates to Ethereum and the basics of sharding in Ethereum. Currently I'm working as a developer on the research team at the Ethereum Foundation, mainly helping with the sharding implementation. I'm interested in blockchain, computer security, and systems programming. Despite a myriad of projects on blockchain for the IoT, few studies have investigated how an IoT blockchain system can be developed with open source technologies, open standards, and web technologies.

    In this presentation, Jollen will share the Flowchain case study, an open source hybrid blockchain project for the IoT. Furthermore, to provide a permissioned edge computing environment for current IoT requirements, he adopts Hyperledger Fabric, an open source community project under the Linux Foundation umbrella, to build a hybrid blockchain that addresses these technical challenges. The current excitement around Bitcoin as "decentralised crypto money" has left us in a state where some of the fundamental ideas behind it, namely anonymous p2p economic exchange, technoactivism to free people from the centralised control of fiat currencies, and other ideas loosely referred to as "crypto anarchism", are being fast forgotten.

    Not only have Bitcoin and its contemporaries become de facto stores of perceived economic value rather than a medium for its exchange; in recent times a large-scale shift has taken place in the computational power balance which determines control over the network. This shift is concurrent with efforts to de-anonymise pseudonymous identities transacting on the public ledger and to legally regulate (i.e. tax) cryptocurrency transactions. Today, various enterprises make sure we have no right to our own privacy, through both direct and indirect means, for profit.

    What are the design challenges for a privacy-preserving, user-centric, community reputation system which enables global trade and finance? Dias is a long-term privacy advocate and crypto evangelist. He currently heads the engineering team at a consumer app startup. He was previously the CTO of Quantified Assets, a crypto management firm, and is an open source contributor. He is trying to solve the challenge of reputation and safety in an anonymous, decentralized world.

    It is called a synthetic currency because it is neither operated by any state nor backed by any tangible value; rather, it is a new tradeable asset arising from a private agreement between two individuals, facilitated by internet technology. Included among these synthetic currencies is Bitcoin (BTC), which has proved to be one of the most important. Digital money is already widely used: for example, when a person makes a deposit, the system identifies the person and credits an equivalent amount, which can then be used at ATM machines, transferred from person to person, or spent on goods or services.

    But this is not the same as the concept of a digital currency, because a digital currency is more like a real form of currency with the characteristic of independence, i.e. it is not issued or controlled by any central authority. Hence the whole system of digital currency is decentralized, with no decision making by politicians, governments or banks.
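    Decentralized agreement without a bank is typically enforced by a proof-of-work mining puzzle: find a nonce such that the block's hash falls below a difficulty target. The sketch below is a toy version (leading zero hex digits instead of Bitcoin's 256-bit target, plain strings instead of a binary block header) and is illustrative only.

```python
# Toy proof-of-work in the spirit of Bitcoin's mining puzzle.
import hashlib

def mine(block_data: str, difficulty: int = 3) -> int:
    """Search for a nonce whose SHA-256 hash starts with
    `difficulty` zero hex digits."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        h = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if h.startswith(prefix):
            return nonce
        nonce += 1

# Mining is expensive; verifying the found nonce is one hash.
nonce = mine("hello-block", difficulty=3)
proof = hashlib.sha256(f"hello-block{nonce}".encode()).hexdigest()
assert proof.startswith("000")
```

    The asymmetry shown at the end (costly to find, cheap to verify) is what lets anonymous peers agree on a ledger without trusting anyone in particular.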

    Currencies such as Bitcoin work through cryptographic algorithms to make the currency digitally usable. There are people who like the idea of a currency which does not involve people in grey suits. In this talk, the focus is on the level of understanding, or perception, among individuals regarding Bitcoin in the domain of cryptocurrency. Before his PhD degree he served the department as a Lecturer in Marketing. He has also been invited to various conferences and workshops for invited talks and training sessions.

    He has also delivered short-term trainings on soft skills to various organizations, in both Chinese and English. He helped the organizing team of the forum by ensuring the participation of the business community from his residential province in China, and coordinated meetings with the business community and various chambers and regulatory bodies for participation in the forum. With a background in Management Consulting, Floyd has over 16 years of international professional experience in setting up and growing international business practices as well as advising senior client executives on decisive topics.

    His experience includes eleven years at Capgemini and spans a variety of industry sectors and technology platforms. Based out of Singapore, Floyd helps institutions harness the potential of blockchain technology for competitive advantage. Tech speaker at various events and conferences conducted by Microsoft. Speaker at many open source hackathons conducted by Mozilla; has worked on IoT, Hadoop big data and machine learning technologies.

    This talk covers our experience organizing Tech Jam for the last two years and using Tech Jam as an opportunity to get children as young as 10 into coding and hardware hacking. The programme is distinguished by two main features. Open-ended creation using tech: sessions are split between learning and hacking. With short, bite-sized learning, students are given more time and freedom to create projects.

    Each session caters to over students. The large number of students creates an environment of busy, engaging work where students learn from each other and are motivated by example. We discuss the specific activities and methodology, as well as the tools we are looking into for the future. How sustainable is it to build open tech communities through coding programs, contests and hackathons?

    What goals do these programs have? Are they achieving the goal of advancing open tech projects? William is an active promoter of Maker and STEM education for our next generation of technical students. He has been actively involved in the Maker movement. He co-leads the YouthMobile initiative, which aims to inspire young girls and boys to drive technological innovation by acquiring the skills and confidence to develop mobile apps for sustainable development.

    He is one of the most experienced Maker Faire organizers in Asia, and is well connected among Makers across Asia, including Japan. I am interested in neuroscience, synthetic biology, electrical engineering, quantum mechanics and science in general. I am a maker and biohacker. I am mostly experienced in hardware hacking and am currently learning software, specifically neural networks. I run a biohacking and hardware hacking meetup in Singapore called Biospacesg. Daniel and Michael talk about some of the problems that arise when trying to put electronics where it should not be, and how they got around them on a budget to build their own AUV, Magni.

    Through their talk, they hope to spark interest in a rather niche field of robotics. Projectile launchers are cool and electromagnetic ones even more so.

    For decades, many have been fascinated by them, and with the US and Chinese navies putting railguns on their ships, they have never been more exciting. Jing Xuan shares the charged and explosive experience of building a railgun and a coilgun, blowing up Arduinos and shooting nails backwards. The device is entirely Open Hardware, from the design itself and the firmware to the source code of the apps running on the phone. The talk will demonstrate the device and provide insights into its specifics, how it functions, and our future plans.

    In this talk, I will lay out the landscape of current applications of intracortical brain-machine interfaces, including discussion of movement recovery in tetraplegics, sensory recovery in blind people, and attempts at integrating both. I will briefly sketch the animal and human research on this topic in Singapore, and finish by taking a look at possible technologies to augment normal brain function in the future. He received his B. His lab aims to understand the neural mechanisms underlying cognitive function at the level of populations of individual neurons or circuits, and to apply this knowledge to develop novel neurotechnologies.

    They love selfies, branded computer games, and want to change the world. How do we bridge this gap and give them skills that are future-ready? How do we make them purposeful Makers who are able to solve the world's problems? I am also keen on making learning much easier.