
Machine Learning

For Structural Prototyping

01

Concept

We set out to create a machine learning algorithm that could help inform structural decisions when designing a shell. The end goal is an algorithm that can discern whether a shell falls within structural design constraints, without a feedback loop through Karamba. Karamba is used to train the algorithm; once trained, the machine learning model can predict which shell shapes would perform well in a Karamba analysis.

This project demonstrates that machine learning can be used as a preliminary structural analysis tool. If a project has a known geometry type, machine learning can be used to find valid structural morphologies within that type. In lieu of a designer individually analyzing each design option, machine learning can indicate which designs in an array are likely to work. This allows a designer to run numerous rapid iterations and generate an array of design possibilities within given structural limits.

x-ray timber building

02

Machine Learning Model

The model was built as a neural network with two hidden layers, implemented in Keras. To compile the model we explored three different optimizers: Adam, RMSprop, and SGD. Adam, the initial optimizer, did not show any obvious training patterns and offered no improvement in accuracy. To improve the results, we expanded to RMSprop and SGD. Of the three, RMSprop offered the best training results and also had the fastest runtime, so our model was ultimately trained using RMSprop.
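As a rough illustration, the sketch below builds the kind of network described: two hidden layers in Keras, compiled with RMSprop. The layer widths, input dimension, and learning rate are assumptions for illustration, not values recorded from the project.

```python
from tensorflow import keras

# Illustrative sizes; the project's actual dimensions are not recorded here.
NUM_BOUNDARY_POINTS = 40              # points sampled along the naked boundary
INPUT_DIM = NUM_BOUNDARY_POINTS * 3   # flattened x, y, z coordinates per mesh

model = keras.Sequential([
    keras.Input(shape=(INPUT_DIM,)),
    keras.layers.Dense(64, activation="relu"),    # hidden layer 1
    keras.layers.Dense(32, activation="relu"),    # hidden layer 2
    keras.layers.Dense(1, activation="sigmoid"),  # valid / invalid boolean
])

# RMSprop gave the best training results and fastest runtime in our tests.
model.compile(
    optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
```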

workflow

03

Generating Input Data

We built an array of simple meshes that, hypothetically, would be used in an architectural application. Each mesh starts from a set of four random points, which are used to generate a polygonal surface within a range of variability. This surface is then relaxed into a shell structure using Kangaroo, and each mesh is structurally analyzed using Karamba. We limited the scope of our analysis to deflection: a mesh's validity is determined by whether it stays within deflection limits, with the limit scaled for each mesh according to its spans. The naked boundary curves of the mesh are divided into points. These Point3d coordinates are taken relative to the anchor point, which is treated as the local origin. The coordinates are then written to a CSV file so that TensorFlow can interpret the 3D shape of the mesh as a flat row of data.
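A minimal sketch of this flattening step, assuming hypothetical function names and that each mesh contributes one row of anchor-relative coordinates to a points file and one boolean to a labels file:

```python
import csv

def mesh_to_row(boundary_points, anchor):
    """Flatten a mesh's boundary points into one row of coordinates
    measured relative to the anchor point (the local origin)."""
    ax, ay, az = anchor
    row = []
    for x, y, z in boundary_points:
        row.extend([x - ax, y - ay, z - az])
    return row

def write_dataset(meshes, coords_path="points.csv", labels_path="labels.csv"):
    """meshes: iterable of (boundary_points, anchor, is_valid) tuples."""
    with open(coords_path, "w", newline="") as fc, \
         open(labels_path, "w", newline="") as fl:
        coord_writer, label_writer = csv.writer(fc), csv.writer(fl)
        for boundary_points, anchor, is_valid in meshes:
            coord_writer.writerow(mesh_to_row(boundary_points, anchor))
            label_writer.writerow([int(is_valid)])
```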

define boundary curve
subdivide boundary curve

04

Output Data

The output we expect from the model is a validation of whether a design is structurally stable: it simply needs to produce a boolean for each mesh stating whether the design passes. Based on the max span of the shell structure, the design needs to be within certain displacement criteria. For this test case the shell was assumed to be made of concrete, so the max deflection needed to be within 1/250 of the max span. The machine learning algorithm is trained on our input data; it can then determine whether a new design is valid based on what it has learned. With this algorithm, a designer can quickly check hundreds of designs for validity before going through arduous engineering.
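The labeling rule amounts to a one-line check. A sketch, assuming span and deflection are measured in the same units:

```python
def is_valid(max_deflection, max_span, limit_ratio=250):
    """True if the shell's max deflection stays within span / 250."""
    return max_deflection <= max_span / limit_ratio
```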

max span
max displacement

05

Testing Results

For the TensorFlow portion, we used Keras, pandas, and NumPy. We wrote the TensorFlow script in Google Colaboratory, then transferred it to GH_CPython to interface with Grasshopper. We import the data into TensorFlow as two CSV files: the Point3d coordinates and a boolean recording the validity of the structural performance. The CSV file of Point3d coordinates was imported as a pandas DataFrame, converted to an array using NumPy, and then resized, also with NumPy, into separate arrays corresponding to each mesh. The array was then split into training and test data, which are exclusive from each other. The ratio of this split was an area of investigation; we tried several permutations: 80-20, 75-25, and 90-10.

The next area of optimization is to increase the size of the data set. We generated meshes individually, each reflecting one item in the data set. Once we reached roughly 200 meshes, the Grasshopper model became too heavy and the Kangaroo relaxation too arduous; this became the upper threshold of our data set. We ultimately chose a data set of 203 items. This can be improved.
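A sketch of these preparation steps in pandas and NumPy, under the same assumptions as the earlier snippets (the file names, point count, and the 80-20 split shown here are illustrative):

```python
import numpy as np
import pandas as pd

NUM_BOUNDARY_POINTS = 40  # must match the value used to generate the CSVs

# Import the two CSV files: coordinates as a DataFrame, labels as booleans.
coords = pd.read_csv("points.csv", header=None).to_numpy()
labels = pd.read_csv("labels.csv", header=None).to_numpy().ravel()

# Resize so each row holds one mesh's flattened (x, y, z) boundary points.
coords = coords.reshape(-1, NUM_BOUNDARY_POINTS * 3)
num_meshes = coords.shape[0]

# Shuffle, then split into mutually exclusive training and test sets.
rng = np.random.default_rng(seed=0)
idx = rng.permutation(num_meshes)
split = int(0.8 * num_meshes)       # 80-20; we also tried 75-25 and 90-10
x_train, y_train = coords[idx[:split]], labels[idx[:split]]
x_test, y_test = coords[idx[split:]], labels[idx[split:]]

# With the model from section 02:
# model.fit(x_train, y_train, epochs=100, validation_data=(x_test, y_test))
```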

06

Outlook

This method could be improved by increasing the size of the data set, and thus the accuracy of the model. This would be an invaluable step if the method were to be implemented in an industry scenario.

Further, for this tool to be more robust, the input data should be more varied. This would ensure that the model is adaptable to different scenarios. As it stands, the model is relatively hermetic and self-contained.

The tool would also be more practical if it moved from a classification model to a generative one, presenting the designer with an array of viable morphologies given the input boundary conditions.

structural classification