Been working on a solution to remove odd query strings from pages after search engines keep adding them at random. I have a solution that partially works, but I would ideally like it to clear them all. Right now the code I have is this:

<IfModule mod_rewrite.c>
RewriteEngine On
RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>

<IfModule mod_rewrite.c>
RewriteEngine on
RewriteCond %{QUERY_STRING} ^(.*)&?sort=[^&]+&?(.*)$ [NC]
RewriteRule …
So, I just finished a 48-hour datathon, and to be honest I did terribly. It was my first datathon. We were given a list of datasets:

- 5 months of taxi demand data (January to May)
- Weather dataset
- Zone neighbors
- dt (date and time of prediction)

And we were told to build a time series forecasting model to forecast the taxi demand. We were told to do it in a forecasting manner, e.g. train with January and test with February, …
I am trying to apply #tribe_events_event_options{display:none;} for all the author users on my site. I don't want them to be able to set an event to featured, and I thought the simplest way was to not display the box. Is there an easy way to target user groups with custom CSS? I've made an author.css file in the child theme folder. I found this post, but it did not work for me: Custom CSS In Admin Only For …
Considering Bayesian posterior inference, which distribution does Monte Carlo sampling take samples from: the posterior or the prior? The posterior is intractable because the denominator (the evidence) is an integral over infinitely many theta values. So, if Monte Carlo samples from the posterior distribution, I am confused as to how the posterior distribution is known, given that it is intractable. Could someone please explain what I am missing? And if Monte Carlo samples from the prior distribution, how do the samples approximate the posterior distribution?
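One way to see why the intractable evidence is not a blocker: Markov chain Monte Carlo methods such as Metropolis-Hastings draw samples from the posterior while only ever evaluating it up to the unknown normalizing constant, because the evidence cancels in the acceptance ratio. A minimal sketch on a made-up conjugate example (prior N(0,1), one observation x = 2 with unit-variance likelihood), chosen so the result can be checked against the analytic posterior N(1, 0.5):

```python
import numpy as np

rng = np.random.default_rng(0)
x = 2.0  # a single hypothetical observation

def log_unnorm_posterior(theta):
    # log prior N(0,1) + log likelihood N(theta,1); the evidence term is dropped
    return -0.5 * theta ** 2 - 0.5 * (x - theta) ** 2

samples = []
theta = 0.0
for _ in range(20000):
    prop = theta + rng.normal(scale=1.0)  # symmetric random-walk proposal
    # the acceptance ratio only needs the UNNORMALIZED posterior: the evidence cancels
    if np.log(rng.uniform()) < log_unnorm_posterior(prop) - log_unnorm_posterior(theta):
        theta = prop
    samples.append(theta)

burned = np.array(samples[2000:])  # discard burn-in
print(round(burned.mean(), 2))     # should land near the analytic posterior mean of 1.0
```

(Plain Monte Carlo from the prior, by contrast, only approximates the posterior if each draw is reweighted by its likelihood, i.e. importance sampling.)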
I am trying to find a solution to a layout I have been tasked to build using 'Gutenberg' blocks. As far as I am aware there can only be one the_content() per page. The layout I am trying to achieve can be found below: https://jsbin.com/vegotugayo/edit?html,output The issue that I have is the breakout area for the quotes. These need to be transparent so the fixed image in the background can be seen, but from what I can work out, this …
I am training a NER model to detect mentioned phrases and slang words in a bias study conducted on court cases. Essentially, I have packets of text that I scanned and these are the complete proceedings. The model is great at detecting the phrases I want based on annotations that I have created from the many cases that I have already scanned. However, I am facing false positives for certain phrases. Here is an example of a phrase I want …
I have a client that changed jobs. He wants to redirect all pages on the WordPress site to a specific page on that same site, but he also wants to keep all the SEO from the blog posts. So I need all the blog posts to still show up, but redirect any old pages to the new page. For example: oldpage.com > oldpage.com/specificpage. Anything that is not a blog post goes to oldpage.com/specificpage, while a blog post would go to blogpost. I …
To speed up my website in China I thought of hiding blocked services from Chinese users, so I wrote this and put it in functions.php:

$isInChina = false;
$ip = $_SERVER['REMOTE_ADDR']; // This will contain the IP of the request
// This service tells me where the IP address is from and gives me more data than I need.
$userData = json_decode(file_get_contents("http://www.geoplugin.net/json.gp?ip=" . $ip));
if (is_null($userData) || empty($userData) || $userData->geoplugin_countryCode == "CN") {
    $isInChina = true; // Count no data as in China …
I have the following 3 columns in my dataset: month, day_of_week, quantity. I would like to predict the future values of quantity, using the following explanatory variables:

- One-hot encoding of month (12 variables).
- One-hot encoding of day_of_week (7 variables).
- The last 2 lags of quantity (2 variables).

Could such an analysis be supported by an LSTM model? I believe I have managed to create an LSTM model which takes the 2 lags as explanatory, but I have no idea how …
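For what it's worth, the feature layout described above (12 + 7 + 2 = 21 columns) maps naturally onto an LSTM input of shape (samples, timesteps, features) with a single timestep, since the lags already carry the history. A sketch on synthetic data (all values and names here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
month = rng.integers(1, 13, n)        # 1..12
day_of_week = rng.integers(0, 7, n)   # 0..6
quantity = rng.random(n)

month_oh = np.eye(12)[month - 1]      # 12 one-hot columns
dow_oh = np.eye(7)[day_of_week]       # 7 one-hot columns
lag1 = np.roll(quantity, 1)           # quantity at t-1
lag2 = np.roll(quantity, 2)           # quantity at t-2

# stack into (n, 21), dropping the first 2 rows whose lags wrapped around
X = np.hstack([month_oh, dow_oh, lag1[:, None], lag2[:, None]])[2:]
y = quantity[2:]

# Keras-style LSTMs expect (samples, timesteps, features); one step per sample here
X_lstm = X.reshape(-1, 1, 21)
print(X_lstm.shape)  # (98, 1, 21)
```

Alternatively, the one-hot columns can be fed through a plain Dense branch while only the lag window goes through the LSTM; both layouts are common.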
I am using ACF, and I am using this function to display numbers in number fields in this way: 1,000.00

// Return ACF Number Fields Formatted with Commas on the Frontend
add_filter('acf/format_value/type=number', 'acf_number_comma_decimal', 20, 3);

// Without Decimal
function acf_number_comma($value, $post_id, $field) {
    $value = number_format(floatval($value));
    return $value;
}

// With Decimal
function acf_number_comma_decimal($value, $post_id, $field) {
    $value = number_format(floatval($value), 2);
    return $value;
}

I would like to replace the "," with "'" so the value …
I'm trying to move from NumPy arrays to tensorflow.Dataset for my dataset. Now, I've created a pipeline to train the model for classification problems. At some point, I normalize all the images using a map function:

dataset['train'] = dataset['train'].map(pre_pr, num_parallel_calls=tf.data.experimental.AUTOTUNE)

And the function definitions look like this:

@tf.function
def normalize(input_image: tf.Tensor, input_mask: tf.Tensor) -> tuple:
    input_image = tf.cast(input_image, tf.float32) / 255.0
    input_mask = tf.cast(input_mask, tf.float32) / 255.0
    return input_image, input_mask

@tf.function
def pre_pr(datapoint: dict) -> tuple:
    input_image = tf.image.resize(datapoint['image'], (IMG_SIZE, IMG_SIZE)) …
I was reading Modern Optimization with R (Use R!) and am wondering whether a book like this exists for Python too. To be precise, something that covers stochastic gradient descent and other advanced optimization techniques. Many thanks!
I'm using Tensorflow's SSD Mobilenet V2 object detection code and am so far disappointed by the results I've gotten. I'm hoping that somebody can take a look at what I've done so far and suggest how I might improve the results: Dataset I'm training on two classes (from OIV5) containing 2352 instances of "Lemon" and 2009 instances of "Cheese". I have read in several places that "state of the art" results can be achieved with a few thousand instances. Train …
As a developer, I'd really like the ability to see every tag written out in the editor's source mode. Is there a way? Some filter, action, or similar?
I have a deep learning model which has to be fed a huge amount of data (200k 256x256 images), so it runs out of memory. I have divided my data into several numpy arrays stored in a specified directory, but I do not know exactly how to create the batch generator from different numpy arrays so that all of them together act as X_train and are loaded into the model in batches. I tried coding the following lines …
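A common pattern for this situation, assuming the shards are saved as separate .npy files in one directory (the file names below are invented stand-ins), is a generator that keeps only one shard in memory at a time and yields fixed-size batches from it; Keras' fit can consume such a generator:

```python
import os
import tempfile
import numpy as np

# stand-in for the real directory: three small hypothetical .npy shards
tmp_dir = tempfile.mkdtemp()
paths = []
for i in range(3):
    p = os.path.join(tmp_dir, f"shard_{i}.npy")
    np.save(p, np.random.rand(10, 8, 8, 1).astype("float32"))
    paths.append(p)

def batch_generator(file_paths, batch_size):
    """Yield batches while loading only one shard into memory at a time."""
    while True:  # loop forever, Keras-generator style
        for path in file_paths:
            shard = np.load(path)
            for start in range(0, len(shard), batch_size):
                yield shard[start:start + batch_size]

gen = batch_generator(paths, 4)
print(next(gen).shape)  # (4, 8, 8, 1)
```

In a real pipeline you would yield (X_batch, y_batch) pairs and shuffle the file list each epoch; tf.data.Dataset.from_generator wraps the same idea.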
I have a model that outputs 0 or 1 for interest/not-interest in a job. I'm running an A/B/C test comparing two models (treatment groups) and none (control group). My plan is ANOVA for hypothesis testing and t-tests with Bonferroni correction for post-hoc testing. But both tests assume normality. Can we have normality for 0/1 outcomes? If so, how? If not, what's the best test (including post-hoc)?
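Since the outcomes are binary, one standard route is to treat each group as a proportion rather than assume normality: an omnibus chi-squared test on the group-by-outcome contingency table plays the role of the ANOVA, and pairwise two-proportion tests with Bonferroni replace the post-hoc t-tests. A sketch on simulated data (group sizes and rates below are invented):

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)
# hypothetical 0/1 interest labels for a control group and two treatments
groups = {
    "control": rng.binomial(1, 0.10, 500),
    "model_a": rng.binomial(1, 0.15, 500),
    "model_b": rng.binomial(1, 0.16, 500),
}

# 3x2 table of (interested, not interested) counts per group
table = [[g.sum(), len(g) - g.sum()] for g in groups.values()]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}, dof={dof}")
```

If the omnibus p-value is small, run the three pairwise comparisons the same way on 2x2 sub-tables and compare each p-value against alpha / 3.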
I am currently trying to get a better understanding of regularization as a concept. This leads me to the following question: Will regularization change when we change the loss function? Is it correct that this is the sole way that these concepts are related?
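One way to make the relationship concrete: a typical regularized objective has the form data_loss + lambda * penalty(w), where the penalty (e.g. the squared L2 norm of the weights) depends only on the parameters. Swapping the data loss (MSE for MAE, cross-entropy, ...) leaves the penalty term itself unchanged, though it does change the gradients the penalty is balanced against. A tiny sketch with arbitrary numbers:

```python
import numpy as np

def penalized_loss(data_loss, w, lam=0.1):
    # L2 penalty: looks only at the weights, never at the data loss itself
    return data_loss + lam * np.sum(w ** 2)

w = np.array([1.0, -2.0])           # toy weight vector, ||w||^2 = 5
mse_value, mae_value = 0.5, 0.3     # two hypothetical data-loss values

# the same lam * 5 penalty is added regardless of which data loss is used
print(penalized_loss(mse_value, w), penalized_loss(mae_value, w))
```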
I'm trying to filter the products on my shop page based on stock level by adding a condition to the meta query. For variable products I get zero results, but simple products work. I'm running the following in the woocommerce_product_query hook:

$metaQuery = $q->get('meta_query');
$metaQuery[] = array(
    'key'     => '_stock',
    'value'   => $quantity,
    'compare' => '>='
);
$q->set( 'meta_query', $metaQuery );

I tried explicitly adding variations to the query, but it didn't seem to make a difference:

$q->set( 'post_type', array('product', …
I am thinking about using a hierarchical Dirichlet process (HDP) to model a patent dataset. I've seen that HDP uses a base distribution and assumes that every topic comes from that base distribution. My problem is twofold: first, I'm wondering what the main results of the HDP procedure are (in the case of LDA we obtain two matrices that we can use to construct word clouds and graphs, but in this case I'm not sure about the results), and what is the exact …
I have a WordPress site set up with download links that can either be set as external URLs or as links to media items uploaded to WordPress. I would like to password-protect these, so clicking a download link would prompt you with an input field for a password, and on entering the correct password you would be redirected to the external URL or the WP media URL. This should not be connected to user authentication, so you should be able …
My dataset consists of a large (1000s+) number of individuals, who may be considered independent of each other. Each individual has a timeseries of about 10-60 data points (each point being a vector of 8 predictors), and a matched timeseries of outcomes of the same length (one value per time point). I want to use LSTM to learn how the historical pattern of that individual predicts the outcome, but I want to do that at EVERY timestep - not just …
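On the modelling side, "a prediction at every timestep" is exactly what a recurrent layer configured to return its full output sequence gives you (return_sequences=True in Keras), and causality is preserved because the hidden state at step t only ever sees steps up to t. A framework-free sketch of that shape contract, with invented dimensions (8 predictors, 4 hidden units, 20 timesteps for one individual):

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_features, n_hidden = 20, 8, 4

x = rng.normal(size=(T, n_features))              # one individual's timeseries
Wx = rng.normal(size=(n_features, n_hidden)) * 0.1
Wh = rng.normal(size=(n_hidden, n_hidden)) * 0.1
Wo = rng.normal(size=(n_hidden, 1)) * 0.1

h = np.zeros(n_hidden)
outputs = []
for t in range(T):
    h = np.tanh(x[t] @ Wx + h @ Wh)  # state carries only the past, never the future
    outputs.append(h @ Wo)           # one prediction emitted at every timestep
outputs = np.array(outputs)
print(outputs.shape)  # (20, 1): per-timestep predictions, like return_sequences=True
```

With many independent individuals, each individual becomes one sample in the batch, and the target is the matched outcome sequence of the same length.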
I am dealing with a data set in which I have to classify between a diseased and a non-diseased individual. I was wondering if it is possible to adapt the MARS regression (Multivariate adaptive regression spline) to use it for classification tasks. Thank you very much and best regards.
I want to use Data Mining/Machine Learning for a problem and I'm not sure if there is a standard algorithm for my problem. The problem is as follows: There is a set of Events and a set of Potential Triggers. Each trigger can give rise to none, one or several Events. I want to classify Potential Triggers based on their features into ones that do not cause any event and ones that do. So far this is a standard classification …
I have a plugin used on a parent theme which uses a shortcode. The plugin (shortcode) works on the parent theme, but when I switch to the child theme it no longer works. I've only added the child theme code in the child theme's functions file; this is the only script currently in it:

function prpin_scripts_child_theme_scripts() {
    wp_enqueue_style( 'parent-theme-css', get_template_directory_uri() . '/style.css' );
}
add_action( 'wp_enqueue_scripts', 'prpin_scripts_child_theme_scripts' );

I thought functions like that are inherited from the parent. Any advice?
I have a form submit event that calls a WordPress AJAX function. The form submit event occurs only once (correct, as expected), but the AJAX function is loaded TWICE (expected to occur once). Here is a sample of my code.

jQuery:

$('#my_form').submit(function () {
    console.log("This log prints ONCE");
    var name = $('#name').val();
    var email = $('#email').val();
    $.ajax({
        type: 'post',
        url: '/wp-admin/admin-ajax.php',
        data: {
            action: 'my_ajax_function',
            name: name,
            email: email,
        },
        dataType: "json",
        success: function (data) {
            $('#response').html(data.message);
        }
    });
…
I am new to reinforcement learning and I am trying to understand how to apply multi-armed bandits to real-world cases. Here is my scenario (since I'm new to this, I'm starting with small cases): We have 5 items to recommend from a set of 20 items. Currently we use CTR (click-through rate) to select the top 5 items and display them. The problem we saw here was cold start and exploration of other …
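A minimal baseline for the scenario above is epsilon-greedy: with probability epsilon show a random item (exploration, which also covers cold-start items), otherwise show the item with the best estimated CTR so far. A sketch on simulated click feedback, one recommendation per step for simplicity (all rates below are invented; the real case would pick the top 5 each step):

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, eps, steps = 20, 0.1, 20000
true_ctr = rng.uniform(0.01, 0.15, n_items)  # hypothetical per-item click rates

clicks = np.zeros(n_items)
shows = np.zeros(n_items)

for _ in range(steps):
    if rng.uniform() < eps:                  # explore: pick a random item
        item = int(rng.integers(n_items))
    else:                                    # exploit: best estimated CTR so far
        est = np.divide(clicks, shows, out=np.zeros(n_items), where=shows > 0)
        item = int(np.argmax(est))
    shows[item] += 1
    clicks[item] += rng.binomial(1, true_ctr[item])

# items the bandit currently rates highest, by empirical CTR
top5 = np.argsort(np.divide(clicks, np.maximum(shows, 1)))[-5:]
print(np.sort(true_ctr[top5]).round(3))
```

More sample-efficient variants (UCB, Thompson sampling) follow the same loop but replace the argmax rule.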
I am using a BERT model for a sentence similarity task. However, my dataset of sentences is very domain-specific and I want to fine-tune my model on it first. My dataset is unlabelled, and the BERT model I want to use was trained with the natural language inference (NLI) method, where sentences are labeled as neutral, entailment or contradiction. I found articles about fine-tuning a BERT model with the MLM method, but I don't think that I can apply that to my model since it …
I'm developing a tab system on my site and I want to build the contents of the tabs using ACF blocks. I have a "Tabs" block which handles the titles of the tabs. I have linked the index number of the tabs via JS and CSS to show and hide content on the page. I'm currently using an ACF field to pick the tab the current block is shown in. So there is a field "Choose the tab where this …
I have two taxonomies and a post type.

Post type: Employee (employee)
Taxonomy: Location (term example: chicago, id 1)
Taxonomy: Job Title (job-title)

I want to list all job titles for a particular location; if no CPT exists in that location, the job title shouldn't be included. I have tried to use WP_Term_Query() and wp_terms() but haven't found a way.
I am building 3 neural network models on a dataset that is already separated into train and test sets. From my analysis, I found that the test set has values which don't exist in the train set, and this puts a certain limitation or maximum capacity on my neural network model(s). By this I mean I cannot seem to improve the accuracy even if I change the hyperparameters or the parameters of my models. I have …
I have the following Keras model:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential()
layer_in = keras.Input(shape=(256,))
layer1 = layers.Dense(2, activation="relu", name="layer1")
layer2 = layers.Dense(3, activation="relu", name="layer2")
layer3 = layers.Dense(4, name="layer3")
model.add(layer_in)
model.add(layer1)
model.add(layer2)
model.add(layer3)
model.build()

Which produces the following when model.summary() is called:

Model: "sequential_8"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
layer1 (Dense)               (None, 2)                 514
layer2 (Dense)               (None, 3)                 9
layer3 (Dense)               (None, 4)                 16
=================================================================
Total params: 539 …
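The parameter counts in that summary follow directly from the Dense layer formula n_in * n_out + n_out (a weight per input-unit pair plus one bias per unit), which is a quick way to sanity-check the table:

```python
def dense_params(n_in, n_out):
    # a Dense layer stores n_in * n_out weights plus n_out biases
    return n_in * n_out + n_out

shapes = [(256, 2), (2, 3), (3, 4)]  # (inputs, units) for layer1..layer3
counts = [dense_params(i, o) for i, o in shapes]
print(counts, sum(counts))  # [514, 9, 16] 539, matching the summary
```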
I am trying to change a CSS variable value in the Customizer and couldn't get live preview working using 'postMessage'. It works if I use the 'refresh' option. Can someone please have a look at the code and point me in the right direction? Thanks.

customizer.php code:

/**
 * Registers settings and controls with the Customizer.
 *
 * @param WP_Customize_Manager $wp_customize Customizer object.
 */
function mytheme_customize_register( $wp_customize ) {
    $wp_customize->add_setting( 'primary_color', [
        'default'           => '#b3000e',
        'sanitize_callback' => 'sanitize_hex_color',
        'transport'         => 'postMessage',
    ] …
My question is really simple. I know the theory behind gradient descent and parameter updates; what I haven't found clarity on is whether the loss value (e.g., the MSE value) is actually used, i.e., multiplied in, when we do backpropagation for gradient descent. For example, do we multiply the MSE loss value by 1 and then do backprop, since at the start of backprop we start with the value 1 (the derivative of x w.r.t. x being 1)? If the loss value isn't used …
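A way to settle this concretely: backpropagation is seeded with d(loss)/d(loss) = 1 and then applies the chain rule, so the loss value itself is never multiplied in; only the derivatives of the loss with respect to the model outputs matter. A hand-worked example on a one-parameter toy model, checked against a numeric gradient:

```python
# tiny model: y_hat = w * x, loss = (y_hat - y)^2, with made-up numbers
x, y, w = 3.0, 2.0, 0.5

# forward pass
y_hat = w * x                 # 1.5
loss = (y_hat - y) ** 2       # 0.25 -- this VALUE never enters the backward pass

# backward pass: seed with d(loss)/d(loss) = 1
d_loss = 1.0
d_yhat = d_loss * 2 * (y_hat - y)  # chain rule through the square: -1.0
d_w = d_yhat * x                   # chain rule through the product: -3.0

# numeric check via central differences
eps = 1e-6
num = (((w + eps) * x - y) ** 2 - ((w - eps) * x - y) ** 2) / (2 * eps)
print(d_w, num)  # both approximately -3.0
```

Note that d_w depends on (y_hat - y), not on loss = 0.25: scaling the loss (say, halving MSE) would scale the gradient through its derivative, but the raw loss scalar is never a factor on its own.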
This is my first question, so Hello World, I guess. I need to create a custom conv2D layer (at least, I think so), which should use my custom module for extracting values in the first layer. It would be something like this:

model.add(CustomConv2D( 128? , 16, padding='valid', strides=16, input_shape=(128, 128, 1)))

So, the thing is, my module looks something like this: CustomModule.stuff(image) returns an np array of size 8. I would like to pass that custom stuff for every $16*16$ …
I'd like to create a 3-tier hierarchical list of terms within a custom taxonomy, which includes up to 10 posts within each of the bottom-level term(s) only, e.g.:

Custom Taxonomy
- Parent Term 1
-- Child Term 1
--- Post 1
--- Post 2
-- Child Term 2
--- Post 1
--- Post 2
--- Post 3
--- Post 4
-- Child Term 3
etc.

I'd eventually like these to display in a set of 'spoilers' or in an 'accordion' …