SpaceX Launches Falcon 9 Rocket With Record 143 Spacecraft in Cosmic Rideshare Program

A veteran rocket from billionaire entrepreneur Elon Musk’s SpaceX aerospace company launched 143 spacecraft into space on Sunday, a new record for the most satellites deployed on a single mission, according to the company.

The Falcon 9 rocket lifted off at 10:00am EST (8:30pm IST) from Space Launch Complex 40 at Cape Canaveral Space Force Station in Florida. It flew south along the eastern coast of Florida on its way to space, the company said.

The reusable rocket ferried 133 commercial and government spacecraft and 10 Starlink satellites to orbit – part of the company’s SmallSat Rideshare Program, which provides small satellite operators with a reliable, affordable ride to space, according to the company.

SpaceX delayed the launch by one day because of unfavourable weather. On January 22, Musk, who is also chief executive of Tesla, wrote on Twitter: “Launching many small satellites for a wide range of customers tomorrow. Excited about offering low-cost access to orbit for small companies!”

SpaceX has previously launched more than 800 of the several thousand satellites needed to offer broadband Internet globally, a $10 billion (roughly Rs. 72,900 crores) investment it estimates could generate $30 billion (roughly Rs. 

ArtEmis: Affective Language for Visual Art

Most annotation datasets in computer vision focus on objective, content-based applications. A recent paper on arXiv.org investigates an underexplored problem: the relationship between visual content and the emotional effect it produces, as expressed through language.

Image credit: Merry Steward via pixy.org, CC0 Public Domain

The researchers collected a dataset of natural-language emotional reactions to visual artwork. Annotators expressed moods, feelings, personal attitudes, and abstract concepts such as freedom, and explained their psychological interpretations by linking them to visual attributes.

Some of the examples even include metaphorical descriptions rooted in subjective experience (like ‘it reminds me of my grandmother’). The dataset’s further potential is demonstrated by training neural speakers on it: some of the speakers were able to produce grounded visual explanations and fared reasonably well in a Turing test.

We present a novel large-scale dataset and accompanying machine learning models aimed at providing a detailed understanding of the interplay between visual content, its emotional effect, and explanations for the latter in language. In contrast to most existing annotation datasets in computer vision, we focus on the affective experience triggered by visual artworks and ask the annotators to indicate the dominant emotion they feel for a given image and, crucially,