• Will robots take our children’s jobs? Featured

    January 4, 2018 bustimize Web Designer
    Our kids may have to forget their dream jobs, unless those jobs are plumbing. Artificial intelligence will render half of today's professions obsolete within 20 years, but not all trades are heading for the chop.
    ...
  • Robot cities: Three urban prototypes that could revolutionise our lives

    Governments have started to see automation as the key to brighter urban futures. But what will this look like?

    Mateja Kovacic
    Sunday 6 May 2018 05:00 BST

    Before I started working on real-world robots, I wrote about their fictional and historical ancestors. This isn’t so far removed from what I do now. In factories, labs, and of course science fiction, imaginary robots keep fuelling our imagination about artificial humans and autonomous machines.

    Real-world robots remain surprisingly dysfunctional, although they are steadily infiltrating urban areas across the globe. This fourth industrial revolution driven by robots is shaping urban spaces and urban life in response to opportunities and challenges in economic, social, political and healthcare domains. Our cities are becoming too big for humans to manage.

    Good city governance enables and maintains the smooth flow of things, data and people. These include public services, traffic and delivery services. Long queues in hospitals and banks imply poor management. Traffic congestion demonstrates that roads and traffic systems are inadequate. Goods that we increasingly order online don’t arrive fast enough. And the wifi often fails our 24/7 digital needs. In sum, urban life, characterised by environmental pollution, a fast pace of life, traffic congestion, connectivity and increased consumption, needs robotic solutions – or so we are led to believe.

    In the past five years, national governments have started to see automation as the key to (better) urban futures. Many cities are becoming test beds for national and local governments for experimenting with robots in social spaces, where robots have both practical purpose (to facilitate everyday life) and a very symbolic role (to demonstrate good city governance). Whether through autonomous cars, automated pharmacists, service robots in local stores, or autonomous drones delivering Amazon parcels, cities are being automated at a steady pace.

    Many large cities (Seoul, Tokyo, Shenzhen, Singapore, Dubai, London, San Francisco) serve as test beds for autonomous vehicle trials in a competitive race to develop “self-driving” cars. Ports and warehouses are also increasingly automated and robotised. Testing of delivery robots and drones is gathering pace beyond the warehouse gates. Automated control systems are monitoring, regulating and optimising traffic flows. Automated vertical farms are innovating production of food in “non-agricultural” urban areas around the world. New mobile health technologies carry the promise of healthcare “beyond the hospital”. Social robots in many guises – from police officers to restaurant waiters – are appearing in urban public and commercial spaces.

    Is this what the future holds? (Shutterstock)

    As these examples show, urban automation is taking place in fits and starts, ignoring some areas and racing ahead in others. But as yet, no one seems to be taking account of all of these various and interconnected developments. So how are we to forecast our cities of the future? Only a broad view allows us to do this. To give a sense, here are three examples: Tokyo, Dubai and Singapore.

    Tokyo

    Japan is currently preparing to host the 2020 Olympics, and its government plans to use the event to showcase many new robotic technologies. Tokyo is therefore becoming an urban living lab. The institution in charge is the Robot Revolution Realisation Council, established in 2014 by the government of Japan.

    Vertical indoor farm (Shutterstock)

    The main objectives of Japan’s robotisation are economic reinvigoration, cultural branding and international demonstration. In line with this, the Olympics will be used to introduce and influence global technology trajectories. In the government’s vision for the Olympics, robot taxis transport tourists across the city, smart wheelchairs greet Paralympians at the airport, ubiquitous service robots greet customers in 20-plus languages, and interactively augmented foreigners speak with the local population in Japanese.

    Tokyo shows us what the process of state-controlled creation of a robotic city looks like.

    Tokyo: city of the future (Shutterstock)

    Singapore

    Singapore, on the other hand, is a “smart city”. Its government is experimenting with robots with a different objective: as physical extensions of existing systems to improve management and control of the city.

    In Singapore, the techno-futuristic national narrative sees robots and automated systems as a “natural” extension of the existing smart urban ecosystem. This vision is unfolding through autonomous delivery robots (Singapore Post’s delivery drone trials in partnership with Airbus Helicopters) and EasyMile’s EZ10 driverless shuttle buses.

    Meanwhile, Singapore hotels are employing state-subsidised service robots to clean rooms and deliver linen and supplies, and robots for early childhood education have been piloted to understand how robots can be used in pre-schools in the future. Health and social care is one of the fastest-growing industries for robots and automation, in Singapore and globally.

    Dubai

    Dubai is another emerging prototype of a state-controlled smart city. But rather than seeing robotisation simply as a way to improve the running of systems, Dubai is intensively robotising public services with the aim of creating the “happiest city on Earth”. Urban robot experimentation in Dubai reveals that authoritarian state regimes are finding innovative ways to use robots in public services, transportation, policing and surveillance.

    National governments compete through robotics to position themselves on the global politico-economic landscape, and they also strive to establish themselves as regional leaders. This was the thinking behind the city’s September 2017 test flight of a flying taxi developed by the German drone firm Volocopter – staged to “lead the Arab world in innovation”. Dubai’s objective is to automate 25 per cent of its transport system by 2030.

    It is currently also experimenting with Barcelona-based PAL Robotics’ humanoid police officer and an autonomous patrol vehicle from Singapore-based Outsaw. If the experiments are successful, the government has announced it will robotise 25 per cent of the police force by 2030.

    While imaginary robots are fuelling our imagination more than ever – from Ghost in the Shell to Blade Runner 2049 – real-world robots make us rethink our urban lives.

    These three urban robotic living labs – Tokyo, Singapore, Dubai – help us gauge what kind of future is being created, and by whom. From hyper-robotised Tokyo to super-smart Singapore and happy, crime-free Dubai, these three comparisons show that, no matter what the context, robots are perceived as a means to achieve global futures based on a specific national imagination. Just like the films, they demonstrate the role of the state in envisioning and creating that future.

    Mateja Kovacic is a visiting research fellow at the University of Sheffield. This article was first published in The Conversation (theconversation.com)

    https://www.independent.co.uk/life-style/design/robot-cities-urban-prototypes-future-living-technology-tokyo-singapore-dubai-a8334606.html

  • 3 Questions: Chris Zegras on designing your city’s transit system

    MIT-designed tool lets people test realistic changes to local transit networks.

    Peter Dizikes | MIT News Office
    December 8, 2017

    Have you ever wanted to change your city’s public transit system? A new digital tool developed by an MIT team lets people design alterations to transit networks and estimate the resulting improvements, based on existing data from urban transit systems. The team, led by Christopher Zegras, a professor in MIT’s Department of Urban Studies and Planning, has already tested the tool with residents in four major U.S. cities — Atlanta, Boston, New Orleans, and San Francisco — as well as in London and Santiago de Chile, and is now planning additional projects in Chile, Colombia, and South Africa. Now the researchers have released a report evaluating how the tool, called CoAXs, has fared during these tests. Zegras spoke to MIT News about the project.

    Q: What is the CoAXs project all about?

    A: It’s a tool that’s designed as a web-based user interface to allow people to explore how changes in public transportation systems would potentially impact the way they can move around a city — and to do it in a relatively intuitive way. You can increase the number of buses per hour or how frequently the trains arrive. Or you can speed up vehicles en route. We designed it to be used on a large touchscreen in small-group settings, to develop a common understanding among different types of stakeholders, with the idea that this could perhaps build new coalitions for public transportation improvements.

    Q: You have tested use of the CoAXs tool in four cities: Boston, Atlanta, New Orleans, and San Francisco, some of which are in the process of adding to their transit systems, including rail expansions and better bus networks. What did you find?

    A: The original place we deployed the tool was a project supported by the Barr Foundation here in Boston in 2015 to enhance bus rapid transit in Boston. The community we focused on was Roxbury, which has historically been disadvantaged in terms of public transit but is quite dependent on it. We tried it in workshops, and users found the CoAXs tool to be easy to use, and credible. And relatable. They could examine how the model [represented] their current transportation experiences. That led to people questioning the assumptions of the [CoAXs] models, which I think is a very important thing if you want to generate credibility — it helps them understand and intuit how it works. Then they would say: What if you did bus transit priority on this corridor, or increased the frequencies, or decreased the boarding time?

    A key element of the tool is representing benefits in terms of accessibility, or the ability to reach opportunities — such as, how many [additional] jobs you can get to in a certain amount of time. From my perspective, accessibility is the fundamental reason we have a transportation system … to get to work, to school, to see our loved ones in a reasonable amount of time. But traditionally we measure transportation through travel-time savings: three minutes per passenger, as opposed to, say, opening up 30,000 new job opportunities.
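    The contrast between the two framings can be made concrete with a small sketch. This is a hypothetical illustration, not the CoAXs implementation; the zone names, travel times and job counts below are invented.

```python
# Hypothetical sketch of an accessibility metric: jobs reachable within a
# travel-time budget, before and after a transit service change.
# All zones, times, and job counts are invented for illustration.

def jobs_accessible(travel_times, jobs_by_zone, budget_min):
    """Count jobs in every zone reachable within the time budget (minutes)."""
    return sum(jobs
               for zone, jobs in jobs_by_zone.items()
               if travel_times.get(zone, float("inf")) <= budget_min)

jobs_by_zone = {"Downtown": 30000, "Longwood": 12000, "Seaport": 18000}

# Door-to-door transit times (minutes) from one origin, before and after
# a hypothetical bus-priority corridor shaves 8 minutes off two trips.
before = {"Downtown": 38, "Longwood": 52, "Seaport": 47}
after  = {"Downtown": 30, "Longwood": 44, "Seaport": 47}

for label, times in [("before", before), ("after", after)]:
    print(label, jobs_accessible(times, jobs_by_zone, budget_min=45))
# The travel-time framing reports "8 minutes saved"; the accessibility
# framing reports the jump in reachable jobs (30,000 -> 42,000 here).
```

    The same service change can thus be presented either as minutes saved per passenger or as additional opportunities unlocked, which is exactly the difference the workshops tested.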

    Last year we were awarded a grant by the TransitCenter, a US foundation dedicated to urban mobility, to test the tool as a way to build enthusiasm for public transportation projects among advocacy organizations. In the Boston area we partnered in a series of face-to-face workshops with the LivableStreets Alliance [of Cambridge, Massachusetts]. Beyond seeing whether the tool generated support for transit improvements, we also aimed to test for differences in presenting benefits to users in terms of accessibility versus travel time. [That is, accessibility to more jobs through public transit.] Some participant groups were randomly assigned to use an accessibility-based version of the tool, while others used a travel-time version. And in that sense our results were a little disappointing to me, vis-a-vis accessibility: The version of the tool that reflected relationships between public transit and job accessibility generated richer conversations, but people didn’t find it easier to use. And overall levels of enthusiasm in the two groups did not vary significantly.

    We then partnered with three advocacy organizations, in New Orleans, Atlanta, and San Francisco, to see whether a remote deployment would work [with people using CoAXs outside of a group setting]. A key premise of CoAXs is that it can enhance engaged dialogue, which is important to building consensus, and we found that people engaged less when using it on their own. When we had workshops, people really engaged. I think this shows that a tool like this is best used in face-to-face, group workshop settings. That said, I can also imagine a hybrid approach: having workshops, using it at home, and then reconvening.

    Q: Overall, then, what is your philosophy about blending public feedback with the analysis of specialists — how necessary is that for transit issues?

    A: This is an age-old challenge to planning: How do you meaningfully and fairly engage people? And “fairly” is important, because it means everyone should have an opportunity to contribute — but not everyone can afford to, in terms of time, and energy, and ability to participate, physically or otherwise. That’s a dimension that’s difficult to navigate.

    With CoAXs, yes, we are trying to build on the idea of “co-creation” [between designers and consumers], which has become increasingly adopted in the public realm. The Swiss, for example, have been leaders in this regard, with the Swiss Railways working directly with consumers to design better services. And this is along the same lines: Can you co-create public transportation by helping to develop a better common understanding of the nature of the problems, the possibilities, and the constraints? CoAXs is only a tool; it’s not a solution, but I think it has a role to play in enhancing the public process. Perhaps more interactive, open, data-based approaches might change, a little bit, the possibilities for enhancing public engagement. Ultimately, however, the solution isn’t digital. The question is: How can we use digitalization to improve public processes? That’s what we’re trying to find out.

    http://news.mit.edu/2017/3-questions-chris-zegras-designing-your-city-transit-system-1208

  • Artificial intelligence in action

    At the MIT-IBM Watson AI Lab, researchers are training computers to recognize dynamic events.

    Meg Murphy | School of Engineering
    April 4, 2018

    A person watching videos that show things opening — a door, a book, curtains, a blooming flower, a yawning dog — easily understands the same type of action is depicted in each clip.

    “Computer models fail miserably to identify these things. How do humans do it so effortlessly?” asks Dan Gutfreund, a principal investigator at the MIT-IBM Watson AI Laboratory and a staff member at IBM Research. “We process information as it happens in space and time. How can we teach computer models to do that?”

    Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. Launched last fall, the lab connects MIT and IBM researchers to work on AI algorithms, the application of AI to industries, the physics of AI, and ways to use AI to advance shared prosperity.

    The Moments in Time dataset is one of the projects related to AI algorithms that is funded by the lab. It pairs Gutfreund with Aude Oliva, a principal research scientist at the MIT Computer Science and Artificial Intelligence Laboratory, as the project’s principal investigators. Moments in Time is built on a collection of 1 million annotated videos of dynamic events unfolding within three seconds. Gutfreund and Oliva, who is also the MIT executive director at the MIT-IBM Watson AI Lab, are using these clips to address one of the next big steps for AI: teaching machines to recognize actions.

    Learning from dynamic scenes

    The goal is to provide deep-learning algorithms with broad coverage of an ecosystem of visual and auditory moments, which may enable models to learn information that isn’t necessarily taught in a supervised manner and to generalize to novel situations and tasks, the researchers say.

    “As we grow up, we look around, we see people and objects moving, we hear the sounds that people and objects make. We have a lot of visual and auditory experiences. An AI system needs to learn the same way and be fed with videos and dynamic information,” Oliva says.

    For every action category in the dataset, such as cooking, running, or opening, there are more than 2,000 videos. The short clips enable computer models to better learn the diversity of meaning around specific actions and events.

    “This dataset can serve as a new challenge to develop AI models that scale to the level of complexity and abstract reasoning that a human processes on a daily basis,” Oliva adds, describing the factors involved. Events can include people, objects, animals, and nature. They may be symmetrical in time — for example, opening means closing in reverse order. And they can be transient or sustained.
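    As a toy illustration of why many short, labelled clips help (this is not the lab’s actual model; the “features”, classes and noise levels below are invented), per-frame feature vectors can be pooled over time into a clip embedding and classified:

```python
# Toy sketch: pool per-frame features over a ~3-second clip, then classify
# with nearest-centroid matching. Real models are deep networks; the feature
# distributions here are invented stand-ins for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def clip_embedding(frames):
    """Pool per-frame feature vectors over time into one clip-level vector."""
    return np.asarray(frames).mean(axis=0)

def fake_clip(center):
    """Simulate ~90 frames (3 s at 30 fps) of noisy features near a class center."""
    return center + 0.1 * rng.standard_normal((90, 4))

centers = {"opening": np.array([1.0, 0, 0, 0]), "running": np.array([0, 1.0, 0, 0])}
train = {label: clip_embedding(fake_clip(c)) for label, c in centers.items()}

def classify(frames):
    emb = clip_embedding(frames)
    return min(train, key=lambda lbl: np.linalg.norm(emb - train[lbl]))

print(classify(fake_clip(centers["opening"])))  # -> opening
```

    Averaging over frames suppresses per-frame noise, which is one reason a tightly bounded three-second window is a convenient unit for labelling and learning.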

    Oliva and Gutfreund, along with additional researchers from MIT and IBM, met weekly for more than a year to tackle technical issues, such as how to choose the action categories for annotations, where to find the videos, and how to put together a wide array so the AI system learns without bias. The team also developed machine-learning models, which were then used to scale the data collection. “We aligned very well because we have the same enthusiasm and the same goal,” says Oliva.

    Augmenting human intelligence

    One key goal at the lab is the development of AI systems that move beyond specialized tasks to tackle more complex problems and benefit from robust and continuous learning. “We are seeking new algorithms that not only leverage big data when available, but also learn from limited data to augment human intelligence,” says Sophie V. Vandebroek, chief operating officer of IBM Research, about the collaboration.

    In addition to pairing the unique technical and scientific strengths of each organization, IBM is also bringing MIT researchers an influx of resources, signaled by its $240 million investment in AI efforts over the next 10 years, dedicated to the MIT-IBM Watson AI Lab. And the alignment of MIT-IBM interest in AI is proving beneficial, according to Oliva.

    “IBM came to MIT with an interest in developing new ideas for an artificial intelligence system based on vision. I proposed a project where we build data sets to feed the model about the world. It had not been done before at this level. It was a novel undertaking. Now we have reached the milestone of 1 million videos for visual AI training, and people can go to our website, download the dataset and our deep-learning computer models, which have been taught to recognize actions.”

    Qualitative results so far have shown models can recognize moments well when the action is well-framed and close up, but they misfire when the category is fine-grained or there is background clutter, among other things. Oliva says that MIT and IBM researchers have submitted an article describing the performance of neural network models trained on the dataset, a dataset itself enriched by the two teams’ shared viewpoints. “IBM researchers gave us ideas to add action categories to have more richness in areas like health care and sports. They broadened our view. They gave us ideas about how AI can make an impact from the perspective of business and the needs of the world,” she says.

    This first version of the Moments in Time dataset is one of the largest human-annotated video datasets capturing visual and audible short events, all of which are tagged with an action or activity label among 339 different classes that include a wide range of common verbs. The researchers intend to produce more datasets with a variety of levels of abstraction to serve as stepping stones toward the development of learning algorithms that can build analogies between things, imagine and synthesize novel events, and interpret scenarios.

    In other words, they are just getting started, says Gutfreund. “We expect the Moments in Time dataset to enable models to richly understand actions and dynamics in videos.”

    http://news.mit.edu/2018/mit-ibm-watson-ai-lab-computers-dynamic-events-0405

  • Solving global business problems with data analytics

    David Simchi-Levi leads the Accenture and MIT Alliance in Business Analytics to develop novel solutions to the most pressing challenges faced by global companies.

    Daniel de Wolff | MIT Industrial Liaison Program
    May 15, 2018

    David Simchi-Levi is a professor of engineering systems with appointments at the Institute for Data, Systems, and Society and the Department of Civil and Environmental Engineering (CEE) at MIT. His research focuses on developing and implementing robust and efficient techniques for supply chains and revenue management. He has founded three companies in the fields of supply chain and business analytics: LogicTools, a venture focused on supply chain analytics, which became a part of IBM; OPS Rules, a business analytics venture that was acquired by Accenture Analytics; and Opalytics, which focuses on cloud computing for business analytics.

    In addition to his role as a professor of engineering systems, Simchi-Levi leads the Accenture and MIT Alliance in Business Analytics. The alliance brings together MIT faculty, PhD students, and a host of partner companies to solve some of the most pressing challenges global organizations face today. The alliance is cross-industry, collaborating with companies in sectors ranging from retail to government and financial services to the airline industry. This diversity enables the alliance to be cross-functional, with projects that focus on everything from supply chain optimization to revenue generation and from predictive maintenance to fraud detection. In many cases, these endeavors have led to companywide adoption of MIT technology, analytics, and algorithms to increase productivity and profits.

    Putting theory to practice, Simchi-Levi and his team worked with a large mining company in Latin America to improve its mining operations. Their algorithm receives data every five seconds from thousands of sensors and predicts product quality 10, 15, and 20 hours prior to product completion. Specifically, they used these data to identify impurities, such as silica level in the finished product, and to suggest corrective strategies to improve quality.

    In the realm of price optimization, Simchi-Levi’s alliance has worked with a number of major online retailers, including Groupon; B2W, Latin America’s largest online retailer; and Rue La La. Rue La La operates in the flash-sale industry, in which online retailers use events to temporarily discount products.

    “But how do you price a product on the website the first time if you have no historical data?” Simchi-Levi asks. “We applied machine learning algorithms to learn from similar products and then optimization algorithms to price products the company never sold before, and the impact was dramatic, increasing revenue by about 11 percent.”

    It’s a deceptively simple answer. But for Simchi-Levi, well known as a visionary thought leader in his field, solving tough problems is at the heart of the work of the Accenture and MIT Alliance in Business Analytics.

    “In the case of Groupon and B2W, we developed a three-step process to optimize and automate pricing decisions,” he says. First, they utilize machine learning to combine internal historical data with external data to create a complete profile of consumer behavior. Second, they post pricing decisions on the website and observe consumer behavior. Third, they learn and improve pricing decisions based on that behavior in order to optimize the final price. “In all of these cases, we made a big impact on the bottom line: increasing revenue, increasing profit, and increasing market share,” he says.
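    The three-step loop could be sketched as follows. This is a hedged toy version, not the alliance’s actual algorithm; the demand curve, posted prices and observed sales are all invented for illustration.

```python
# Toy sketch of the three-step pricing loop: observe demand at posted prices,
# learn a demand model, then optimize expected revenue. All data invented.
import numpy as np

# Steps 1-2: prices posted on the site and the demand observed at each.
posted_prices = np.array([10.0, 12.0, 14.0, 16.0])
units_sold    = np.array([200.0, 170.0, 140.0, 110.0])

# Step 3a: learn a linear demand model d(p) = a + b*p via least squares.
b, a = np.polyfit(posted_prices, units_sold, 1)  # slope b, intercept a

# Step 3b: optimize expected revenue r(p) = p * (a + b*p) over a price grid.
grid = np.linspace(8, 20, 121)
revenue = grid * (a + b * grid)
best_price = grid[revenue.argmax()]
print(round(best_price, 2))
```

    In practice the learning step would pool data from similar products (the cold-start case described above) and the loop would repeat as new sales come in, refining the demand estimate each round.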

    At any point in time, Simchi-Levi’s business analytics alliance, which has been going strong since 2013, has between 10 and 20 projects running simultaneously. He suggests the reason so many companies are turning to MIT for their business challenges has a lot to do with recent technology trends and the Alliance’s role at the forefront of those developments.

    Specifically, he mentions three technology trends: digitization; automation; and analytics, including the application of machine learning and artificial intelligence algorithms. However, he observes that initially it is difficult for executives to accept that black-box analytics can do a better job at pricing a product than the merchants who know the product and have been working in the industry for 25 years. While Simchi-Levi concedes that this is partially true, he notes that with thousands upon thousands of products to price, merchants can focus only on the top 10 percent, whereas MIT’s analytics can match their performance on that top 10 percent while delivering comparable performance on the middle 50 percent and on the long tail.

    More precisely, “While the company merchant will focus on a small portion, we can focus on the entire company portfolio,” he says. “We’re talking about the ability to use data and analytics to optimize prices for thousands of products.”

    “Business analytics is a very exciting area. If you open any business journal you will see references to data science and data analytics,” Simchi-Levi says. But his expertise has led him to explore a deeper truth about this obsession with data analytics: “My experience is that while there is a lot of excitement around this area, industry actually does very little [in the way of] using data and analytics to automate and improve processes.”

    He says there are three main challenges industry faces in the area of data analytics: data quality, information silos, and internal resistance. “What we do at MIT is bring all of these opportunities together by improving the data quality, convincing executives to start experimenting with some of the technology, and connecting different data sources into an effective platform for analytics.”

    http://news.mit.edu/2018/mit-david-simchi-levi-using-data-analytics-solve-global-business-challenges-0515

  • Electric autonomous vehicles would “cut urban transport costs by 40%”

    March 13, 2018 bustimize Uncategorized

    A report from the World Economic Forum has predicted that autonomous electric vehicles will cut the cost of urban road transport travel by 40%.

    Produced with UK consultant Bain & Company, the report claims cities could “dramatically increase productivity” if they embrace shared, autonomous electric vehicles (EVs), and that the US could realise benefits worth $635bn if it fully realises the potential of new transport systems.

    Cities should prioritise public transport and commercial fleets, as those are the most heavily used types of vehicle, the report advises.

    They should fully electrify public transport and remove regulatory barriers to autonomous vehicles (AVs).

    Electric charging infrastructure should be widely introduced to relieve “range anxiety” among private drivers, and make charging points as green as possible.

    The report lists cities that are leading the way to new urban systems. These include:

    Berlin’s EUREF Campus (pictured), a newly built 5ha business park that hosts technology companies and research institutions, has a microgrid that uses artificial intelligence to optimise charging and sends surplus energy back to the grid, based on dynamic pricing.

    Buenos Aires, Montreal and Santiago, Chile, have prioritised the electrification of public transport through the public procurement of electric buses.

    Dortmund is developing non-financial incentives for last-mile delivery companies to electrify their fleets, and EVs have been given extended access to the city centre.

    Guangzhou has sped up bus electrification and aims to reach 200,000 new units in 2018. China’s government has also announced it will develop national regulations for testing AVs on public roads across the country.

    Hong Kong is encouraging developers to scale up its EV charging infrastructure. This includes solutions integrated with the smart payment system, Octopus, which is also used to access the public transport network.

    Los Angeles’ Police Department has decided to switch 260 fleet vehicles to EVs. Charging infrastructure development is also under way and being integrated with decentralised solar power generation. By leasing rather than buying vehicles, the LAPD can invest in charging stations, including fast-charging stations in city centre car parks.

    Transport for London requires all new black cabs to be electric or emission-free, and diesel vehicles will not be permitted in London by 2032. A total of 80 charging points will be dedicated to black cabs, with plans to implement 150 by the end of 2018, and 300 by 2020.

    Oslo plans to have its fleet of 1,200 public vehicles running on electricity by 2020, and gives EVs priority lanes. A project in Vulkan, on the city’s outskirts, demonstrates a public–private cooperation model between the city, a utility company and a real-estate firm for smart charging stations.

    Paris has partnered with private company Autolib to set up an electric car sharing service with 4,000 EVs and more than 6,200 charging points across the region.

    Image: EUREF, Berlin’s business park for smart mobility companies (Berlin Agency for Electromobility)

    http://www.globalconstructionreview.com/news/electric-autonomous-vehicles-would-cut-urban-trans/

  • The Development of a Performance Indicator to Compare Regularity of Service between Urban Bus Operators

    Mr. Mark Trompet from CTS, Imperial College London
    Wednesday, 08 December 2010 – 16:00
    Location: Room 610, Skempton (Civil Eng.) Bldg, Imperial College London

    Abstract
    The work presented in this seminar evaluated options for a key performance indicator that comparably illustrates differences between urban bus operators in maintaining service regularity on high-frequency routes. The data used for this study were collected by the International Bus Benchmarking Group, facilitated by Imperial College London, and relate to twelve medium-to-large urban bus operators from different countries. Through two annual rounds of data collection, lessons were learned about feasible data characteristics, required sample sizes and data-cleaning processes. Four key performance indicator alternatives were tested and their strengths and weaknesses described: ‘Excess Wait Time’; ‘Standard deviation of the difference between the scheduled and the actual headway’; and the percentage of service within a fixed or relative number of minutes of the scheduled headway, referred to respectively as ‘Wait Assessment’ and ‘Service Regularity’. The results suggest that while all four methodologies offer a different, interesting view of service regularity performance, the Excess Wait Time methodology is the best option when the key performance indicator should reflect the customer experience of service regularity.

    1. Introduction to the Railway and Transport Strategy Centre (RTSC)
    2. Introduction to the Benchmarking Work within the RTSC, specifically the International Bus Benchmarking Group (IBBG)
    3. Service Regularity Indicators: Literature and Use by Operators
    4. Sample size, Data Characteristics and Data Cleaning Methodologies
    5. Testing:
    – Excess Wait Time,
    – Standard deviation of the differences between the scheduled and the
    actual headway,
    – Wait Assessment and
    – Service Regularity.
    6. Conclusions


  • Teaching Machines to Think Like Humans

    December 22, 2017 bustimize Ios

    A new type of neural network made with memristors can dramatically improve the efficiency of teaching machines to think like humans.

    ...
  • Faster big-data analysis

    October 30, 2017 bustimize Web Designer
    System for performing “tensor algebra” offers 100-fold speedups over previous software packages. ...
  • An algorithm for your blind spot

    October 9, 2017 bustimize Ios
    Using smartphone cameras, system for seeing around corners could help with self-driving cars and search-and-rescue. ...